Plant Seedlings Classification using CNN¶

Problem Statement

Context¶

In recent times, the field of agriculture has urgently needed modernization: checking whether plants are growing correctly still demands extensive manual work. Despite several advances in agricultural technology, workers in the industry still need to sort and recognize different plants and weeds by hand, which takes significant time and effort in the long term. This trillion-dollar industry is ripe for technological innovations that cut down on the need for manual labor, and this is where Artificial Intelligence can genuinely benefit workers in the field: AI and Deep Learning can greatly shorten the time and energy required to identify plant seedlings. Doing so more efficiently, and even more effectively, than experienced manual labor could lead to better crop yields, free up human involvement for higher-order agricultural decision-making, and in the long term result in more sustainable environmental practices in agriculture.

Objective¶

The aim of this project is to build a Convolutional Neural Network that classifies plant seedlings into their respective categories.

Data Dictionary¶

The Aarhus University Signal Processing group, in collaboration with the University of Southern Denmark, has recently released a dataset containing images of unique plants belonging to 12 different species.

  • The dataset can be downloaded from Olympus.
  • The data file names are:
    • images.npy
    • Labels.csv
  • Due to the large volume of data, the images have been converted into the images.npy file and the labels stored in Labels.csv, so that you can work on the data/project seamlessly without worrying about the data volume.

  • The goal of the project is to create a classifier capable of determining a plant's species from an image.

List of Species

  • Black-grass
  • Charlock
  • Cleavers
  • Common Chickweed
  • Common Wheat
  • Fat Hen
  • Loose Silky-bent
  • Maize
  • Scentless Mayweed
  • Shepherds Purse
  • Small-flowered Cranesbill
  • Sugar beet

Importing necessary libraries

In [ ]:
import os
import numpy as np                                                                               # Importing numpy for matrix operations
import pandas as pd                                                                              # Importing pandas to read CSV files
import matplotlib.pyplot as plt                                                                  # Importing matplotlib for plotting and visualizing images
import math                                                                                      # Importing math module to perform mathematical operations
import cv2                                                                                       # Importing OpenCV for image processing
import seaborn as sns                                                                            # Importing seaborn to plot graphs
import random
from mpl_toolkits.axes_grid1 import ImageGrid                                                    # Importing ImageGrid to arrange images in a grid


# Tensorflow modules
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator                              # Importing the ImageDataGenerator for data augmentation
from tensorflow.keras.models import Sequential                                                   # Importing the Sequential module to define a sequential model
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization  # Importing all the layers needed to build our CNN model
from tensorflow.keras import optimizers, layers, callbacks, backend
from tensorflow.keras.optimizers import Adam, SGD                                                # Importing the optimizers which can be used in our model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau                          # Importing callbacks to control and stabilize training

# Scikit-learn modules
from sklearn import preprocessing, metrics                                                       # Importing preprocessing and evaluation utilities
from sklearn.model_selection import train_test_split                                             # Importing train_test_split to split the data into train, validation, and test sets
from sklearn.metrics import confusion_matrix                                                     # Importing confusion_matrix to plot the confusion matrix
from sklearn.preprocessing import LabelBinarizer                                                 # Importing LabelBinarizer for one-hot encoding of labels

# Display images using OpenCV inside Colab
from google.colab.patches import cv2_imshow                                                      # Importing cv2_imshow from google.colab.patches to display images

from IPython.display import display, Markdown

# Ignore warnings
import warnings
warnings.filterwarnings('ignore')

Loading the dataset

In [ ]:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
In [ ]:
# Load the image file of dataset
images = np.load('/content/drive/MyDrive/images.npy', allow_pickle=True)      # Read the images

# Load the labels file of dataset
labels = pd.read_csv('/content/drive/MyDrive/Labels.csv')  # Read the Labels

Data Overview

Understand the shape of the dataset¶

In [ ]:
print(images.shape)         # Check the shape of images dataset
print(labels.shape)         # Check the shape of labels dataset
(4750, 128, 128, 3)
(4750, 1)
In [ ]:
# Getting all unique different categories
categ=np.unique(labels)
num_categ = len(categ)
print(categ)
print("Total categories:", num_categ)
['Black-grass' 'Charlock' 'Cleavers' 'Common Chickweed' 'Common wheat'
 'Fat Hen' 'Loose Silky-bent' 'Maize' 'Scentless Mayweed'
 'Shepherds Purse' 'Small-flowered Cranesbill' 'Sugar beet']
Total categories: 12

Observations:

  • There are 12 distinct plant categories.
  • The dataset contains a total of 4,750 plant images.
  • Each image has a resolution of 128 x 128 pixels with 3 color channels (RGB).

Exploratory Data Analysis

Plotting various plant categories on a 12x12 grid¶

In [ ]:
def plot_images(images, labels):
  keys = dict(labels['Label'])                                                    # Mapping from image index to its label
  rows = 3                                                                        # Defining number of rows = 3
  cols = 4                                                                        # Defining number of columns = 4
  fig = plt.figure(figsize=(10, 8))                                               # Defining the figure size as 10x8
  for i in range(cols):
      for j in range(rows):
          random_index = np.random.randint(0, len(labels))                        # Generating a random index into the data
          ax = fig.add_subplot(rows, cols, i * rows + j + 1)                      # Adding subplots with 3 rows and 4 columns
          ax.imshow(images[random_index, :])                                      # Plotting the image
          ax.set_title(keys[random_index])                                        # Titling the subplot with the image's label
  plt.show()
In [ ]:
# Defining a figure of size 12x12
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index

# Plotting 12 images from each plant category
for category_id, category in enumerate(categ):
  condition = labels["Label"] == category
  plant_indices = index[condition].tolist()
  for j in range(0,12):
      ax = grid[i]
      ax.imshow(images[plant_indices[j]])
      ax.axis('off')
      if i % num_categ == num_categ - 1:
        # Printing the name for each category
        ax.text(200, 70, category, verticalalignment='center')
      i += 1
plt.show();
In [ ]:
plot_images(images,labels)   # Input the images and labels to the function and plot the images with their labels

Checking the distribution of the target variable¶

In [ ]:
sns.countplot(x=labels['Label'])
plt.xticks(rotation='vertical')
plt.title('Distribution of Plant Seedling Categories')
plt.xlabel('Category')
plt.ylabel('Count')
plt.show()

Observations:

  1. Balanced Distribution:

    • The dataset exhibits a generally even distribution of samples across most categories, which fosters a robust machine learning model and mitigates category-specific biases.
  2. High Representation:

    • Certain categories, such as "Common Chickweed" and "Loose Silky-bent", boast a substantial number of samples, enhancing the model's accuracy in identifying these categories.
  3. Low Representation:

    • Conversely, categories like "Shepherds Purse" and "Common Wheat" have limited samples, potentially hindering the model's ability to learn and predict these categories accurately, which may result in lower accuracy for these classes.
  4. Data Augmentation:

    • To counteract the imbalance, applying data augmentation techniques to underrepresented categories can generate synthetic diversity, boosting the model's performance across all categories.
  5. Model Performance:

    • The overall distribution suggests a solid foundation for the model's learning, thanks to the sufficient data in most categories. However, ongoing monitoring and adjustments may be necessary to ensure balanced performance, particularly for categories with limited samples.

These observations highlight the strengths and potential areas for improvement in the dataset used for training the plant seedling classification model.
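The high- and low-representation claims above can be checked directly from the labels. Below is a minimal sketch, using a hypothetical miniature label set (`labels_demo`) in place of the real Labels.csv:

```python
import pandas as pd

# Hypothetical miniature label set standing in for Labels.csv
labels_demo = pd.DataFrame({"Label": ["Maize"] * 6 + ["Charlock"] * 4 + ["Common Wheat"] * 2})

counts = labels_demo["Label"].value_counts()   # samples per category, largest first
imbalance_ratio = counts.max() / counts.min()  # largest class vs smallest class

print(counts.to_dict())   # {'Maize': 6, 'Charlock': 4, 'Common Wheat': 2}
print(imbalance_ratio)    # 3.0
```

On the real labels, the same two lines quantify the roughly 3:1 gap between the largest class (Loose Silky-bent, 654 images) and the smallest (Common wheat and Maize, 221 each) visible in the count plot.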

Data Pre-Processing

Resizing and applying Gaussian Blur on a single image and plotting

In [ ]:
# Resizing the image to half its original size, i.e., from 128x128 to 64x64
img = cv2.resize(images[1000],None,fx=0.50,fy=0.50)

#Applying Gaussian Blur
img_g = cv2.GaussianBlur(img,(3,3),0)

#Displaying preprocessed and original images
print("Resized to 50% and applied Gaussian Blurring with kernel size 3X3")
cv2_imshow(img_g)
print('\n')
print("Original Image of size 128X128")
cv2_imshow(images[1000])
Resized to 50% and applied Gaussian Blurring with kernel size 3X3

Original Image of size 128X128

Converting to HSV and applying mask for the background and focusing only on plant

In [ ]:
# Convert to HSV image
hsvImg = cv2.cvtColor(img_g, cv2.COLOR_BGR2HSV)
cv2_imshow(hsvImg)
In [ ]:
# Create mask (parameters - green color range)
lower_green = (25, 40, 50)
upper_green = (75, 255, 255)
mask = cv2.inRange(hsvImg, lower_green, upper_green)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Create bool mask
bMask = mask > 0

# Apply the mask
clearImg = np.zeros_like(img, np.uint8)  # Create empty image
clearImg[bMask] = img[bMask]  # Apply boolean mask to the origin image

#Masked Image after removing the background
cv2_imshow(clearImg)

Applying Resize, Gaussian Blur and Masking on All Images

In [ ]:
images_copy = images.copy()
In [ ]:
lower_green = (25, 40, 50)
upper_green = (75, 255, 255)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
preprocessed_data_color = []

for img in images:
  resize_img = cv2.resize(img,None,fx=0.50,fy=0.50)
  Gblur_img = cv2.GaussianBlur(resize_img,(3,3),0)
  hsv_img = cv2.cvtColor(Gblur_img, cv2.COLOR_BGR2HSV)
  mask = cv2.inRange(hsv_img, lower_green, upper_green)
  mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
  bMask = mask > 0
  clearImg = np.zeros_like(resize_img, np.uint8)  # Create empty image
  clearImg[bMask] = resize_img[bMask]  # Apply boolean mask to the original image

  preprocessed_data_color.append(clearImg)

#Preprocessed all plant images
preprocessed_data_color = np.asarray(preprocessed_data_color)

Visualizing the preprocessed color plant images

In [ ]:
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index

for category_id, category in enumerate(categ):
  condition = labels["Label"] == category
  plant_indices = index[condition].tolist()
  for j in range(0,12):
      ax = grid[i]
      ax.imshow(preprocessed_data_color[plant_indices[j]]/255.)
      ax.axis('off')
      if i % num_categ == num_categ - 1:
          ax.text(70, 30, category, verticalalignment='center')
      i += 1
plt.show();
In [ ]:
preprocessed_data_color.shape
Out[ ]:
(4750, 64, 64, 3)

Converting all color images to Grayscale images

In [ ]:
preprocessed_data_gs = []
for img in preprocessed_data_color:
  gi = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  preprocessed_data_gs.append(gi)

preprocessed_data_gs = np.asarray(preprocessed_data_gs)
In [ ]:
preprocessed_data_gs.shape
Out[ ]:
(4750, 64, 64)

Visualizing the preprocessed Grayscale plant images

In [ ]:
fig = plt.figure(1, figsize=(num_categ, num_categ))
grid = ImageGrid(fig, 111, nrows_ncols=(num_categ, num_categ), axes_pad=0.05)
i = 0
index = labels.index

for category_id, category in enumerate(categ):
  condition = labels["Label"] == category
  plant_indices = index[condition].tolist()
  for j in range(0,12):
      ax = grid[i]
      ax.imshow(preprocessed_data_gs[plant_indices[j]],cmap='gray',vmin=0, vmax=255)

      ax.axis('off')
      if i % num_categ == num_categ - 1:
          ax.text(70, 30, category, verticalalignment='center')
      i += 1
plt.show();

Normalization for Images

In [ ]:
preprocessed_data_gs = preprocessed_data_gs / 255.
preprocessed_data_color = preprocessed_data_color / 255.

Label Encoding and One-Hot encoding for Plant categories

In [ ]:
labels['Label'] = labels['Label'].astype('category')
labels['Label'] = labels['Label'].cat.codes
labels.value_counts()
Out[ ]:
Label
6        654
3        611
8        516
10       496
5        475
1        390
11       385
2        287
0        263
9        231
7        221
4        221
Name: count, dtype: int64
In [ ]:
from tensorflow.keras.utils import to_categorical

labels = to_categorical(labels, num_classes=12)

print("Shape of y_train:", labels.shape)
print("One value of y_train:", labels[0])
Shape of y_train: (4750, 12)
One value of y_train: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]

Model Building

Model Evaluation Criterion¶

The model's classification of a plant seedling has two possible outcomes:

  • Inaccurately classifying the seedling into the wrong category.
  • Accurately classifying the seedling into the correct category.

Which metric to optimize?

  • It is important that the model accurately classifies plant seedlings into their correct categories, so we aim to maximize accuracy.
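To make the metric concrete, here is a small sketch (with made-up true and predicted class indices) showing that accuracy is simply the fraction of seedlings placed in the correct category, i.e. the diagonal of the confusion matrix over its total:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical true and predicted class indices for 8 seedling images
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])

acc = accuracy_score(y_true, y_pred)       # fraction classified correctly
cm = confusion_matrix(y_true, y_pred)      # rows: true class, columns: predicted class

print(acc)                                 # 0.75
print(cm.diagonal().sum() / cm.sum())      # 0.75 — same accuracy recovered from the matrix
```

The off-diagonal entries of the matrix show which categories get confused with which, which is useful later when inspecting visually similar classes.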

Model Building with Grayscale Images¶

Split the preprocessed_data_gs into training, validation, and testing sets

In [ ]:
from sklearn.model_selection import train_test_split

# 1st split: 70% training, 30% held out for validation + test
X_train, X_test1, y_train, y_test1 = train_test_split(preprocessed_data_gs, labels, test_size=0.30, stratify=labels, random_state=42)

# 2nd split: the held-out 30% is halved into validation and test sets (15% of the data each)
X_val, X_test, y_val, y_test = train_test_split(X_test1, y_test1, test_size=0.50, stratify=y_test1, random_state=42)

Printing the shapes for all data splits

In [ ]:
print("X_train shape: ", X_train.shape)
print("y_train shape: ", y_train.shape)
print("X_val shape: ", X_val.shape)
print("y_val shape: ", y_val.shape)
print("X_test shape: ", X_test.shape)
print("y_test shape: ", y_test.shape)
X_train shape:  (3325, 64, 64)
y_train shape:  (3325, 12)
X_val shape:  (712, 64, 64)
y_val shape:  (712, 12)
X_test shape:  (713, 64, 64)
y_test shape:  (713, 12)

Observations:

  • Train dataset has 3325 plant images
  • Validation dataset has 712 plant images
  • Test dataset has 713 plant images
  • Plant images are 64x64 grayscale; a single channel dimension is added in the reshaping step that follows
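The stratify argument used in these splits keeps each category's share identical across train, validation, and test. A minimal sketch on synthetic two-class labels (the class sizes here are made up):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical labels: 90 samples of class 0 and 30 of class 1 (a 3:1 ratio)
y = np.array([0] * 90 + [1] * 30)
X = np.arange(len(y)).reshape(-1, 1)       # dummy features, one per sample

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=42)

# Stratification preserves the 3:1 class ratio in both splits
print(np.bincount(y_tr))                   # [63 21]
print(np.bincount(y_te))                   # [27  9]
```

Without stratify, a random 30% split could over- or under-sample the rarer classes, which matters here given categories with as few as 221 images.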

Reshaping data into shapes compatible with Keras models

In [ ]:
X_train = X_train.reshape(X_train.shape[0], 64, 64, 1)
X_val = X_val.reshape(X_val.shape[0], 64, 64, 1)
X_test = X_test.reshape(X_test.shape[0], 64, 64, 1)

Converting type to float

In [ ]:
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_val = X_val.astype('float32')

Using ImageDataGenerator for common data augmentation techniques

In [ ]:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(shear_range = 0.2,rotation_range=180,  # randomly rotate images in the range
        zoom_range = 0.1, # Randomly zoom image
        width_shift_range=0.1,  # randomly shift images horizontally
        height_shift_range=0.1,  # randomly shift images vertically
        horizontal_flip=True,  # randomly flip images horizontally
        vertical_flip=True  # randomly flip images vertically
    )
In [ ]:
training_set = train_datagen.flow(X_train,y_train,batch_size=32,seed=42,shuffle=True)

Creating a CNN model containing multiple layers for image processing and dense layer for classification

In [ ]:
backend.clear_session()
# Fixing the seed for random number generators so that we receive the same output every time
np.random.seed(42)
import random
random.seed(42)
tf.random.set_seed(42)
In [ ]:
model1 = Sequential()

# Add a Convolution layer with 32 kernels of 3X3 shape with activation function ReLU
model1.add(Conv2D(32, (3, 3), input_shape = (64, 64, 1), activation = 'relu', padding = 'same'))
#Adding Batch Normalization
model1.add(layers.BatchNormalization())
# Add a Max Pooling layer of size 2X2
model1.add(MaxPooling2D(pool_size = (2, 2),strides=2))


# Add another Convolution layer with 64 kernels of 3X3 shape with activation function ReLU
model1.add(Conv2D(64, (3, 3), activation = 'relu', padding = 'same'))
model1.add(layers.BatchNormalization())
model1.add(MaxPooling2D(pool_size = (2, 2),strides=2))

# Add another Convolution layer with 64 kernels of 3X3 shape with activation function ReLU
model1.add(Conv2D(64, (3, 3), activation = 'relu', padding = 'valid'))  # No padding
model1.add(layers.BatchNormalization())
model1.add(MaxPooling2D(pool_size = (2, 2),strides=2))


# Flattening the layer before fully connected layers
model1.add(Flatten())

# Adding a fully connected layer with 512 neurons
model1.add(layers.BatchNormalization())
model1.add(Dense(units = 512, activation = 'elu'))

# Adding dropout with probability 0.2
model1.add(Dropout(0.2))


# Adding a fully connected layer with 256 neurons
model1.add(layers.BatchNormalization())
model1.add(Dense(units = 256, activation = 'elu'))
# model1.add(Dropout(0.2))


# The final output layer with 12 neurons to predict the categorical classification
model1.add(Dense(units = 12, activation = 'softmax'))

Using the Adam optimizer with categorical cross-entropy as the loss function and accuracy as the evaluation metric

In [ ]:
# initiate Adam optimizer
adam_opt = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model1.compile(optimizer = adam_opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])

Printing Model Summary

In [ ]:
model1.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 64, 64, 32)        320       
                                                                 
 batch_normalization (Batch  (None, 64, 64, 32)        128       
 Normalization)                                                  
                                                                 
 max_pooling2d (MaxPooling2  (None, 32, 32, 32)        0         
 D)                                                              
                                                                 
 conv2d_1 (Conv2D)           (None, 32, 32, 64)        18496     
                                                                 
 batch_normalization_1 (Bat  (None, 32, 32, 64)        256       
 chNormalization)                                                
                                                                 
 max_pooling2d_1 (MaxPoolin  (None, 16, 16, 64)        0         
 g2D)                                                            
                                                                 
 conv2d_2 (Conv2D)           (None, 14, 14, 64)        36928     
                                                                 
 batch_normalization_2 (Bat  (None, 14, 14, 64)        256       
 chNormalization)                                                
                                                                 
 max_pooling2d_2 (MaxPoolin  (None, 7, 7, 64)          0         
 g2D)                                                            
                                                                 
 flatten (Flatten)           (None, 3136)              0         
                                                                 
 batch_normalization_3 (Bat  (None, 3136)              12544     
 chNormalization)                                                
                                                                 
 dense (Dense)               (None, 512)               1606144   
                                                                 
 dropout (Dropout)           (None, 512)               0         
                                                                 
 batch_normalization_4 (Bat  (None, 512)               2048      
 chNormalization)                                                
                                                                 
 dense_1 (Dense)             (None, 256)               131328    
                                                                 
 dense_2 (Dense)             (None, 12)                3084      
                                                                 
=================================================================
Total params: 1811532 (6.91 MB)
Trainable params: 1803916 (6.88 MB)
Non-trainable params: 7616 (29.75 KB)
_________________________________________________________________

Observations:

Model Summary:

  • Model type: Sequential
  • Number of layers: 13
  • Total parameters: 1,811,532 (6.91 MB)
  • Trainable parameters: 1,803,916 (6.88 MB)
  • Non-trainable parameters: 7,616 (29.75 KB)

Layer Types:

  • Conv2D: 3
  • Batch Normalization: 4
  • Max Pooling2D: 3
  • Flatten: 1
  • Dense: 3
  • Dropout: 1

EarlyStopping

In [ ]:
callback_es = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20, min_delta=0.001, restore_best_weights=True)

Fitting the Classifier for Training set and validating for Validation set

In [ ]:
history1 = model1.fit(training_set,
               batch_size=32,
               epochs=500,
               validation_data = (X_val,y_val),
               shuffle=True,
               callbacks = [callback_es])
Epoch 1/500
104/104 [==============================] - 9s 26ms/step - loss: 2.1782 - accuracy: 0.3254 - val_loss: 12.7683 - val_accuracy: 0.0604
Epoch 2/500
104/104 [==============================] - 2s 19ms/step - loss: 1.5695 - accuracy: 0.4520 - val_loss: 14.6970 - val_accuracy: 0.0604
Epoch 3/500
104/104 [==============================] - 2s 20ms/step - loss: 1.4663 - accuracy: 0.4860 - val_loss: 12.0875 - val_accuracy: 0.0604
Epoch 4/500
104/104 [==============================] - 2s 20ms/step - loss: 1.3088 - accuracy: 0.5371 - val_loss: 12.0385 - val_accuracy: 0.0604
Epoch 5/500
104/104 [==============================] - 2s 20ms/step - loss: 1.2506 - accuracy: 0.5549 - val_loss: 8.3955 - val_accuracy: 0.0604
Epoch 6/500
104/104 [==============================] - 2s 20ms/step - loss: 1.1926 - accuracy: 0.5678 - val_loss: 3.9848 - val_accuracy: 0.1489
Epoch 7/500
104/104 [==============================] - 2s 19ms/step - loss: 1.1206 - accuracy: 0.6033 - val_loss: 1.4169 - val_accuracy: 0.4958
Epoch 8/500
104/104 [==============================] - 2s 19ms/step - loss: 1.0208 - accuracy: 0.6325 - val_loss: 3.2518 - val_accuracy: 0.1910
Epoch 9/500
104/104 [==============================] - 2s 19ms/step - loss: 1.0177 - accuracy: 0.6307 - val_loss: 2.2412 - val_accuracy: 0.4171
Epoch 10/500
104/104 [==============================] - 2s 19ms/step - loss: 0.9842 - accuracy: 0.6391 - val_loss: 1.7442 - val_accuracy: 0.4733
Epoch 11/500
104/104 [==============================] - 2s 19ms/step - loss: 0.9547 - accuracy: 0.6496 - val_loss: 1.0657 - val_accuracy: 0.6334
Epoch 12/500
104/104 [==============================] - 2s 20ms/step - loss: 0.9451 - accuracy: 0.6547 - val_loss: 1.5400 - val_accuracy: 0.4986
Epoch 13/500
104/104 [==============================] - 2s 19ms/step - loss: 0.9104 - accuracy: 0.6725 - val_loss: 1.2441 - val_accuracy: 0.6053
Epoch 14/500
104/104 [==============================] - 2s 19ms/step - loss: 0.8984 - accuracy: 0.6764 - val_loss: 2.7402 - val_accuracy: 0.3933
Epoch 15/500
104/104 [==============================] - 2s 19ms/step - loss: 0.8407 - accuracy: 0.6947 - val_loss: 4.6956 - val_accuracy: 0.2289
Epoch 16/500
104/104 [==============================] - 2s 20ms/step - loss: 0.8487 - accuracy: 0.7002 - val_loss: 1.3573 - val_accuracy: 0.6096
Epoch 17/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7946 - accuracy: 0.7113 - val_loss: 1.6051 - val_accuracy: 0.4874
Epoch 18/500
104/104 [==============================] - 2s 20ms/step - loss: 0.8013 - accuracy: 0.7128 - val_loss: 2.2892 - val_accuracy: 0.3581
Epoch 19/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7559 - accuracy: 0.7218 - val_loss: 1.4525 - val_accuracy: 0.5239
Epoch 20/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7343 - accuracy: 0.7290 - val_loss: 2.5106 - val_accuracy: 0.4396
Epoch 21/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7393 - accuracy: 0.7347 - val_loss: 5.1301 - val_accuracy: 0.3244
Epoch 22/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7251 - accuracy: 0.7293 - val_loss: 2.2801 - val_accuracy: 0.3441
Epoch 23/500
104/104 [==============================] - 2s 20ms/step - loss: 0.7141 - accuracy: 0.7408 - val_loss: 1.7683 - val_accuracy: 0.5197
Epoch 24/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7090 - accuracy: 0.7269 - val_loss: 1.7571 - val_accuracy: 0.4789
Epoch 25/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6856 - accuracy: 0.7519 - val_loss: 3.5010 - val_accuracy: 0.3750
Epoch 26/500
104/104 [==============================] - 2s 19ms/step - loss: 0.7029 - accuracy: 0.7341 - val_loss: 1.9321 - val_accuracy: 0.4789
Epoch 27/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6428 - accuracy: 0.7591 - val_loss: 1.2049 - val_accuracy: 0.6475
Epoch 28/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6582 - accuracy: 0.7495 - val_loss: 1.5640 - val_accuracy: 0.5183
Epoch 29/500
104/104 [==============================] - 2s 20ms/step - loss: 0.6348 - accuracy: 0.7687 - val_loss: 2.4517 - val_accuracy: 0.3329
Epoch 30/500
104/104 [==============================] - 2s 20ms/step - loss: 0.6170 - accuracy: 0.7750 - val_loss: 1.7656 - val_accuracy: 0.4972
Epoch 31/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6506 - accuracy: 0.7582 - val_loss: 1.9569 - val_accuracy: 0.5169
Epoch 32/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6160 - accuracy: 0.7738 - val_loss: 1.0407 - val_accuracy: 0.6419
Epoch 33/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6057 - accuracy: 0.7802 - val_loss: 0.9606 - val_accuracy: 0.6756
Epoch 34/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6107 - accuracy: 0.7741 - val_loss: 0.9338 - val_accuracy: 0.7163
Epoch 35/500
104/104 [==============================] - 2s 19ms/step - loss: 0.6010 - accuracy: 0.7792 - val_loss: 3.2843 - val_accuracy: 0.2584
Epoch 36/500
104/104 [==============================] - 2s 20ms/step - loss: 0.5734 - accuracy: 0.7798 - val_loss: 0.8697 - val_accuracy: 0.7360
Epoch 37/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5775 - accuracy: 0.7756 - val_loss: 0.6575 - val_accuracy: 0.7612
Epoch 38/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5942 - accuracy: 0.7829 - val_loss: 1.7205 - val_accuracy: 0.5478
Epoch 39/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5736 - accuracy: 0.7859 - val_loss: 1.3395 - val_accuracy: 0.5730
Epoch 40/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5486 - accuracy: 0.7916 - val_loss: 1.3041 - val_accuracy: 0.6222
Epoch 41/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5609 - accuracy: 0.7889 - val_loss: 2.0223 - val_accuracy: 0.4874
Epoch 42/500
104/104 [==============================] - 2s 20ms/step - loss: 0.5444 - accuracy: 0.7973 - val_loss: 0.6795 - val_accuracy: 0.7683
Epoch 43/500
104/104 [==============================] - 2s 20ms/step - loss: 0.5345 - accuracy: 0.7979 - val_loss: 1.4866 - val_accuracy: 0.6152
Epoch 44/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5323 - accuracy: 0.8015 - val_loss: 0.7540 - val_accuracy: 0.7514
Epoch 45/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5307 - accuracy: 0.8039 - val_loss: 1.3905 - val_accuracy: 0.5955
Epoch 46/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4994 - accuracy: 0.8078 - val_loss: 0.8000 - val_accuracy: 0.7346
Epoch 47/500
104/104 [==============================] - 2s 20ms/step - loss: 0.5104 - accuracy: 0.8081 - val_loss: 0.7425 - val_accuracy: 0.7514
Epoch 48/500
104/104 [==============================] - 2s 20ms/step - loss: 0.5418 - accuracy: 0.8003 - val_loss: 1.2211 - val_accuracy: 0.6376
Epoch 49/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5384 - accuracy: 0.7955 - val_loss: 1.6102 - val_accuracy: 0.5969
Epoch 50/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5386 - accuracy: 0.8012 - val_loss: 1.6606 - val_accuracy: 0.5576
Epoch 51/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5007 - accuracy: 0.8150 - val_loss: 0.5834 - val_accuracy: 0.8006
Epoch 52/500
104/104 [==============================] - 2s 19ms/step - loss: 0.5044 - accuracy: 0.8096 - val_loss: 0.7459 - val_accuracy: 0.7556
Epoch 53/500
104/104 [==============================] - 2s 20ms/step - loss: 0.5046 - accuracy: 0.8042 - val_loss: 0.8462 - val_accuracy: 0.7317
Epoch 54/500
104/104 [==============================] - 2s 20ms/step - loss: 0.4809 - accuracy: 0.8132 - val_loss: 1.5861 - val_accuracy: 0.5688
Epoch 55/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4775 - accuracy: 0.8177 - val_loss: 0.7101 - val_accuracy: 0.7711
Epoch 56/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4760 - accuracy: 0.8162 - val_loss: 1.1212 - val_accuracy: 0.6629
Epoch 57/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4767 - accuracy: 0.8156 - val_loss: 0.6293 - val_accuracy: 0.7935
Epoch 58/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4759 - accuracy: 0.8217 - val_loss: 0.8971 - val_accuracy: 0.7444
Epoch 59/500
104/104 [==============================] - 2s 20ms/step - loss: 0.4572 - accuracy: 0.8202 - val_loss: 1.3419 - val_accuracy: 0.6461
Epoch 60/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4503 - accuracy: 0.8244 - val_loss: 1.0568 - val_accuracy: 0.6587
Epoch 61/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4643 - accuracy: 0.8217 - val_loss: 0.9027 - val_accuracy: 0.7430
Epoch 62/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4614 - accuracy: 0.8238 - val_loss: 1.4097 - val_accuracy: 0.6587
Epoch 63/500
104/104 [==============================] - 2s 20ms/step - loss: 0.4565 - accuracy: 0.8198 - val_loss: 1.0728 - val_accuracy: 0.6362
Epoch 64/500
104/104 [==============================] - 2s 20ms/step - loss: 0.4594 - accuracy: 0.8241 - val_loss: 0.9600 - val_accuracy: 0.6952
Epoch 65/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4488 - accuracy: 0.8280 - val_loss: 1.7568 - val_accuracy: 0.5941
Epoch 66/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4552 - accuracy: 0.8310 - val_loss: 0.7215 - val_accuracy: 0.7402
Epoch 67/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4461 - accuracy: 0.8340 - val_loss: 0.6771 - val_accuracy: 0.7711
Epoch 68/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4218 - accuracy: 0.8436 - val_loss: 2.1734 - val_accuracy: 0.4593
Epoch 69/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4559 - accuracy: 0.8292 - val_loss: 1.4160 - val_accuracy: 0.6320
Epoch 70/500
104/104 [==============================] - 2s 19ms/step - loss: 0.4312 - accuracy: 0.8367 - val_loss: 1.7031 - val_accuracy: 0.6166
Epoch 71/500
104/104 [==============================] - 2s 20ms/step - loss: 0.4303 - accuracy: 0.8463 - val_loss: 0.9202 - val_accuracy: 0.7261

Model accuracy on Validation data

In [ ]:
# Note: this picks the *training* accuracy at the epoch with the lowest training loss;
# to report the validation metric itself, use history1.history['val_accuracy'] and 'val_loss'.
model1_accuracy_val = history1.history['accuracy'][np.argmin(history1.history['loss'])]
model1_accuracy_val
Out[ ]:
0.8436090350151062

Model accuracy on Test data

In [ ]:
model1_accuracy_test = model1.evaluate(X_test,y_test)[1]
model1_accuracy_test
23/23 [==============================] - 0s 7ms/step - loss: 0.5839 - accuracy: 0.7812
Out[ ]:
0.7812061905860901
In [ ]:
display(Markdown(f"""
**Observation:**

- Test Accuracy is {model1_accuracy_test * 100:.1f}%, which, while decent, suggests there is room for improvement in correctly classifying new, unseen data.
- Validation accuracy for least loss is {model1_accuracy_val * 100:.1f}%, showing that the model performs better on the validation set than on the test set, but it still does not reach the high 80s, indicating only average generalization capability.
"""))

Observation:

  • Test Accuracy is 78.1%, which, while decent, suggests there is room for improvement in correctly classifying new, unseen data.
  • Validation accuracy for least loss is 84.4%, showing that the model performs better on the validation set than on the test set, but it still does not reach the high 80s, indicating only average generalization capability.

Printing out the Confusion Matrix

In [ ]:
from sklearn.metrics import confusion_matrix
import itertools

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Greens):

    if normalize:
        # Normalize before plotting so the heatmap and the cell text agree
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    fig = plt.figure(figsize=(10, 10))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=90)
    plt.yticks(tick_marks, classes)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Predict the classes for the test dataset
predY1 = model1.predict(X_test)
predYClasses1 = np.argmax(predY1, axis = 1)
trueY = np.argmax(y_test, axis = 1)

# confusion matrix
confusionMTX = confusion_matrix(trueY, predYClasses1)

# plot the confusion matrix
plot_confusion_matrix(confusionMTX, classes = categ)
23/23 [==============================] - 0s 2ms/step

Observations:

  1. High Accuracy for Common Chickweed and Charlock: The model predicts "Common Chickweed" (73 correct) and "Charlock" (58 correct) reliably, with few misclassifications.
  2. Misclassifications in Loose Silky-bent: "Loose Silky-bent" is frequently misclassified as "Black-grass" and "Common Chickweed."
  3. Accurate Predictions for Maize and Scentless Mayweed: The model shows reliable predictions for "Maize" (32) and "Scentless Mayweed" (65).
  4. Confusion Among Similar Classes: Significant confusion exists between "Loose Silky-bent" and other similar classes.
  5. Moderate Overall Performance: The model exhibits mixed performance, excelling in some categories while struggling in others, indicating a need for further refinement.
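The pairwise confusion noted above can be quantified directly by scanning the off-diagonal entries of the confusion matrix. A minimal sketch with a hypothetical helper (`most_confused_pairs`) and a made-up 3-class matrix, not the notebook's actual `confusionMTX`:

```python
import numpy as np

def most_confused_pairs(cm, labels, top=3):
    """Return the largest off-diagonal confusion-matrix entries
    as (true_label, predicted_label, count) triples."""
    cm = np.asarray(cm)
    off = cm.copy()
    np.fill_diagonal(off, 0)                        # ignore correct predictions
    flat = np.argsort(off, axis=None)[::-1][:top]   # indices of biggest confusions
    return [(labels[i // cm.shape[1]], labels[i % cm.shape[1]], int(off.flat[i]))
            for i in flat]

# Toy counts for illustration only
toy_cm = [[30,  2,  8],
          [ 1, 25,  0],
          [12,  3, 40]]
print(most_confused_pairs(toy_cm, ["Black-grass", "Charlock", "Loose Silky-bent"], top=2))
# [('Loose Silky-bent', 'Black-grass', 12), ('Black-grass', 'Loose Silky-bent', 8)]
```

Applied to the real matrix, this surfaces the "Loose Silky-bent" / "Black-grass" pair as the dominant source of errors.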
In [ ]:
from sklearn.metrics import f1_score

print(f1_score(trueY, predYClasses1, average='macro')) # macro, take the average of each class’s F-1 score:
print(f1_score(trueY, predYClasses1, average='micro')) #micro calculates positive and negative values globally
print(f1_score(trueY, predYClasses1, average='weighted')) #F-1 scores are averaged by using the number of instances in a class as weight
print(f1_score(trueY, predYClasses1, average=None))
0.7581343219556552
0.7812061711079944
0.7809512102183226
[0.28571429 0.84057971 0.80555556 0.86390533 0.575      0.91503268
 0.6043956  0.91428571 0.8496732  0.63888889 0.93421053 0.87037037]

Observation:

Above are the F1 scores computed with the macro, micro, weighted, and per-class (average=None) methods.
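The three averages relate to the per-class scores in a simple way: macro is their plain mean, weighted is a support-weighted mean, and in a single-label multiclass setting micro-F1 equals overall accuracy (which is why the micro value above matches the 0.7812 test accuracy). A sketch with made-up per-class scores and supports:

```python
import numpy as np

# Hypothetical per-class F1 scores and class supports (not the notebook's values)
per_class_f1 = np.array([0.4, 0.8, 0.9])
support      = np.array([10, 30, 60])

macro_f1    = per_class_f1.mean()                        # unweighted mean of classes
weighted_f1 = np.average(per_class_f1, weights=support)  # support-weighted mean

print(round(macro_f1, 3))     # 0.7
print(round(weighted_f1, 3))  # 0.82
```

With imbalanced supports, weighted-F1 sits closer to the scores of the larger classes, while macro-F1 is pulled down by poorly performing small classes such as "Black-grass" here.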

In [ ]:
from sklearn.metrics import classification_report

print(classification_report(trueY, predYClasses1, target_names=categ))
                           precision    recall  f1-score   support

              Black-grass       0.29      0.28      0.29        39
                 Charlock       0.72      1.00      0.84        58
                 Cleavers       1.00      0.67      0.81        43
         Common Chickweed       0.95      0.79      0.86        92
             Common wheat       0.49      0.70      0.57        33
                  Fat Hen       0.86      0.97      0.92        72
         Loose Silky-bent       0.65      0.56      0.60        98
                    Maize       0.86      0.97      0.91        33
        Scentless Mayweed       0.87      0.83      0.85        78
          Shepherds Purse       0.61      0.68      0.64        34
Small-flowered Cranesbill       0.92      0.95      0.93        75
               Sugar beet       0.94      0.81      0.87        58

                 accuracy                           0.78       713
                macro avg       0.76      0.77      0.76       713
             weighted avg       0.79      0.78      0.78       713

Observation:

  1. Perfect Recall for Charlock: The model achieves perfect recall (1.00) for "Charlock," though with only moderate precision (0.72).
  2. Excellent Performance for Common Chickweed: "Common Chickweed" has a high f1-score of 0.86, with precision at 0.95 and recall at 0.79.
  3. Struggles with Black-grass: The model performs poorly on "Black-grass" with a low f1-score of 0.29.
  4. Balanced Performance for Most Classes: Most classes, such as "Fat Hen" and "Maize," have balanced precision and recall values around 0.85.
  5. Overall Model Accuracy: The model has an overall accuracy of 0.78, indicating moderate performance.

Plotting Loss and Accuracy for both Training and Validation sets

In [ ]:
plt.rcParams["figure.figsize"] = (7,6)
history_df = pd.DataFrame(history1.history)
history_df.loc[:, ['loss', 'val_loss']].plot(title="Cross-entropy")
history_df.loc[:, ['accuracy', 'val_accuracy']].plot(title="Accuracy")
Out[ ]:
<Axes: title={'center': 'Accuracy'}>

Observations:

  • The training loss decreases steadily, indicating the model keeps learning on the training set.
  • The validation loss, however, fluctuates widely from epoch to epoch (roughly 0.58 to 2.17 in the later epochs), which signals unstable generalization rather than a well-balanced fit.
  • Validation accuracy likewise swings between roughly 46% and 80%, well below the steadily climbing training accuracy.
  • The widening gap between the training and validation curves suggests the model is starting to overfit; regularization such as batch normalization and data augmentation (used in the next model) should help.

Saving Model and Weights

In [ ]:
model1.save('./classifier_color.h5')                     # save classifier (model) and architecture to single file
model1.save_weights('./classifier_color_weights.h5')

Conclusion:

We have built a CNN model to predict the species of a plant seedling. It performs reasonably well, but accuracy remains in the mid-80s at best, indicating only average generalization capability.

Model Building with Preprocessed Color Images¶

Split the preprocessed_data_color into training, testing, and validation set

In [ ]:
from sklearn.model_selection import train_test_split

test_split = 0.30    # hold out 30%, then split it evenly into validation and test (15% each)
random_state = 42

X_train, X_test1, y_train, y_test1 = train_test_split(
    preprocessed_data_color, labels, test_size=test_split, stratify=labels, random_state=random_state
)

X_val, X_test, y_val, y_test = train_test_split(
    X_test1, y_test1, test_size=0.50, stratify=y_test1, random_state=random_state
)
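The two-stage split first holds out 30% of the data and then divides that holdout evenly, so the final proportions are 70% train / 15% validation / 15% test. A quick arithmetic check, assuming 4750 total images as the printed shapes below suggest:

```python
n_total = 4750                     # total images (3325 + 712 + 713)
n_holdout = round(n_total * 0.30)  # first split: 30% held out -> 1425
n_train = n_total - n_holdout      # 3325
n_val = n_holdout // 2             # second split: half of the holdout -> 712
n_test = n_holdout - n_val         # 713

print(n_train, n_val, n_test)      # 3325 712 713
```

These counts match the shapes printed in the next cell.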

Printing the shapes for all data splits

In [ ]:
print("X_train shape: ", X_train.shape)
print("y_train shape: ", y_train.shape)
print("X_val shape: ", X_val.shape)
print("y_val shape: ", y_val.shape)
print("X_test shape: ", X_test.shape)
print("y_test shape: ", y_test.shape)
X_train shape:  (3325, 64, 64, 3)
y_train shape:  (3325, 12)
X_val shape:  (712, 64, 64, 3)
y_val shape:  (712, 12)
X_test shape:  (713, 64, 64, 3)
y_test shape:  (713, 12)

Observation:

  • X_train has 3325 plant images
  • X_val has 712 plant images
  • X_test has 713 plant images
  • Plant images are 64x64 pixels with 3 color channels

Reshaping data into shapes compatible with Keras models

In [ ]:
X_train = X_train.reshape(X_train.shape[0], 64, 64, 3)
X_val = X_val.reshape(X_val.shape[0], 64, 64, 3)
X_test = X_test.reshape(X_test.shape[0], 64, 64, 3)

Converting type to float

In [ ]:
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_val = X_val.astype('float32')

Using ImageDataGenerator for common data augmentation techniques

In [ ]:
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    shear_range=0.2,         # randomly shear images
    rotation_range=180,      # randomly rotate images in the range
    zoom_range=0.1,          # randomly zoom images
    width_shift_range=0.1,   # randomly shift images horizontally
    height_shift_range=0.1,  # randomly shift images vertically
    horizontal_flip=True,    # randomly flip images horizontally
    vertical_flip=True       # randomly flip images vertically
)
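Under the hood, each augmentation is just a random geometric transform applied to the pixel array before the batch is yielded. A minimal NumPy illustration of what the two flip options do, using a toy 2x2 single-channel "image" (the generator applies these randomly per sample):

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])

# horizontal_flip=True corresponds to a left-right mirror of the pixel array
h_flip = np.flip(img, axis=1)   # [[2, 1], [4, 3]]

# vertical_flip=True corresponds to an up-down mirror
v_flip = np.flip(img, axis=0)   # [[3, 4], [1, 2]]

print(h_flip.tolist(), v_flip.tolist())
```

Because plant seedlings have no canonical orientation, both flips (and the full 180-degree rotation range) are label-preserving, which is what makes them safe augmentations here.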
In [ ]:
training_set = train_datagen.flow(X_train,y_train,batch_size=32,seed=random_state,shuffle=True)

Creating a CNN model containing multiple layers for image processing and dense layer for classification

In [ ]:
backend.clear_session()
# Fixing the seed for random number generators so that we receive the same output every time
np.random.seed(42)
import random
random.seed(42)
tf.random.set_seed(42)
In [ ]:
# Initialising the CNN classifier
model2 = Sequential()

# Add a Convolution layer with 32 kernels of 3X3 shape with activation function ReLU
model2.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu', padding = 'same'))
#Adding Batch Normalization
model2.add(layers.BatchNormalization())
# Add a Max Pooling layer of size 2X2
model2.add(MaxPooling2D(pool_size = (2, 2),strides=2))


# Add another Convolution layer with 64 kernels of 3X3 shape with activation function ReLU
model2.add(Conv2D(64, (3, 3), activation = 'relu', padding = 'same'))
model2.add(layers.BatchNormalization())
model2.add(MaxPooling2D(pool_size = (2, 2),strides=2))

# Add another Convolution layer with 64 kernels of 3X3 shape with activation function ReLU
model2.add(Conv2D(64, (3, 3), activation = 'relu', padding = 'valid')) #no Padding
model2.add(layers.BatchNormalization())
model2.add(MaxPooling2D(pool_size = (2, 2),strides=2))


# Flattening the layer before fully connected layers
model2.add(Flatten())

# Adding a fully connected layer with 512 neurons
model2.add(layers.BatchNormalization())
model2.add(Dense(units = 512, activation = 'relu'))

# Adding dropout with probability 0.2
model2.add(Dropout(0.2))


# Adding a fully connected layer with 128 neurons
model2.add(layers.BatchNormalization())
model2.add(Dense(units = 128, activation = 'relu'))
model2.add(Dropout(0.2))


# The final output layer with 12 neurons to predict the categorical classification
model2.add(Dense(units = 12, activation = 'softmax'))
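The spatial sizes that appear in the model summary follow the standard conv/pool formulas: 'same' padding preserves the size, 'valid' shrinks it by kernel size minus one, and each 2x2 stride-2 max pool halves it (floor division). A quick sketch tracing the 64x64 input through the stack above (helper names are illustrative):

```python
def conv_out(size, kernel, padding):
    """Output size of a stride-1 Conv2D along one spatial dimension."""
    return size if padding == "same" else size - kernel + 1

def pool_out(size, pool=2, stride=2):
    """Output size of a max-pooling layer along one spatial dimension."""
    return (size - pool) // stride + 1

s = 64
s = pool_out(conv_out(s, 3, "same"))    # conv 'same' -> 64, pool -> 32
s = pool_out(conv_out(s, 3, "same"))    # conv 'same' -> 32, pool -> 16
s = pool_out(conv_out(s, 3, "valid"))   # conv 'valid' -> 14, pool -> 7
print(s, s * s * 64)                    # 7 3136  (flattened feature size)
```

The 3136-unit flattened vector matches the Flatten layer's output shape in the summary below.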

Using the Adam optimizer with categorical cross-entropy loss and accuracy as the evaluation metric

In [ ]:
# initiate Adam optimizer
adam_opt = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model2.compile(optimizer = adam_opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])

Printing Model Summary

In [ ]:
model2.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 64, 64, 32)        896       
                                                                 
 batch_normalization (Batch  (None, 64, 64, 32)        128       
 Normalization)                                                  
                                                                 
 max_pooling2d (MaxPooling2  (None, 32, 32, 32)        0         
 D)                                                              
                                                                 
 conv2d_1 (Conv2D)           (None, 32, 32, 64)        18496     
                                                                 
 batch_normalization_1 (Bat  (None, 32, 32, 64)        256       
 chNormalization)                                                
                                                                 
 max_pooling2d_1 (MaxPoolin  (None, 16, 16, 64)        0         
 g2D)                                                            
                                                                 
 conv2d_2 (Conv2D)           (None, 14, 14, 64)        36928     
                                                                 
 batch_normalization_2 (Bat  (None, 14, 14, 64)        256       
 chNormalization)                                                
                                                                 
 max_pooling2d_2 (MaxPoolin  (None, 7, 7, 64)          0         
 g2D)                                                            
                                                                 
 flatten (Flatten)           (None, 3136)              0         
                                                                 
 batch_normalization_3 (Bat  (None, 3136)              12544     
 chNormalization)                                                
                                                                 
 dense (Dense)               (None, 512)               1606144   
                                                                 
 dropout (Dropout)           (None, 512)               0         
                                                                 
 batch_normalization_4 (Bat  (None, 512)               2048      
 chNormalization)                                                
                                                                 
 dense_1 (Dense)             (None, 128)               65664     
                                                                 
 dropout_1 (Dropout)         (None, 128)               0         
                                                                 
 dense_2 (Dense)             (None, 12)                1548      
                                                                 
=================================================================
Total params: 1744908 (6.66 MB)
Trainable params: 1737292 (6.63 MB)
Non-trainable params: 7616 (29.75 KB)
_________________________________________________________________
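The parameter counts above can be verified by hand: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters weights (the +1 is the bias), a Dense layer (inputs + 1) * units, and BatchNormalization 4 parameters per channel (gamma and beta trainable; moving mean and variance not). A sketch with illustrative helper names:

```python
def conv_params(k, c_in, filters):
    """Weights in a Conv2D layer: (k*k*c_in + 1) * filters."""
    return (k * k * c_in + 1) * filters

def dense_params(n_in, n_out):
    """Weights in a Dense layer: (n_in + 1) * n_out."""
    return (n_in + 1) * n_out

print(conv_params(3, 3, 32))    # 896     (first Conv2D)
print(conv_params(3, 32, 64))   # 18496   (second Conv2D)
print(conv_params(3, 64, 64))   # 36928   (third Conv2D)
print(dense_params(3136, 512))  # 1606144 (first Dense)
print(4 * 32)                   # 128     (first BatchNormalization)
```

Every value matches the corresponding row of the summary.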

EarlyStopping

In [ ]:
callback_es = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20, min_delta=0.0001, restore_best_weights=True)
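With monitor='val_accuracy' and patience=20, training halts once validation accuracy has failed to improve by at least min_delta for 20 consecutive epochs, and restore_best_weights=True rolls the model back to the best epoch. A pure-Python sketch of that stopping logic on a hypothetical accuracy series (in the run below, training indeed stops 20 epochs after the best validation accuracy):

```python
def early_stop_epoch(metric_history, patience, min_delta=0.0001):
    """Return (stop_epoch, best_epoch) for a maximized monitor like val_accuracy."""
    best, best_epoch, wait = float("-inf"), 0, 0
    for epoch, value in enumerate(metric_history):
        if value > best + min_delta:      # improvement resets the patience counter
            best, best_epoch, wait = value, epoch, 0
        else:
            wait += 1
            if wait >= patience:          # out of patience: stop here
                return epoch, best_epoch
    return len(metric_history) - 1, best_epoch

# Hypothetical val_accuracy curve: improves until 0.91, then plateaus
history = [0.60, 0.75, 0.85, 0.91] + [0.88] * 25
print(early_stop_epoch(history, patience=20))   # (23, 3)
```

With restore_best_weights=True, the weights returned are those from the best epoch (epoch 3 here), not the final one.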

Fitting the Classifier for Training set and validating for Validation set

In [ ]:
history2 = model2.fit(training_set,
               batch_size=32,
               epochs=500,
               validation_data = (X_val,y_val),
               shuffle=True,
               callbacks = [callback_es])
Epoch 1/500
104/104 [==============================] - 8s 43ms/step - loss: 1.9048 - accuracy: 0.3850 - val_loss: 7.4421 - val_accuracy: 0.0604
Epoch 2/500
104/104 [==============================] - 4s 42ms/step - loss: 1.2850 - accuracy: 0.5570 - val_loss: 11.1210 - val_accuracy: 0.0604
Epoch 3/500
104/104 [==============================] - 4s 41ms/step - loss: 1.1071 - accuracy: 0.6186 - val_loss: 16.0394 - val_accuracy: 0.0604
Epoch 4/500
104/104 [==============================] - 4s 40ms/step - loss: 0.9447 - accuracy: 0.6797 - val_loss: 14.8906 - val_accuracy: 0.0604
Epoch 5/500
104/104 [==============================] - 4s 40ms/step - loss: 0.8780 - accuracy: 0.6908 - val_loss: 9.4866 - val_accuracy: 0.0730
Epoch 6/500
104/104 [==============================] - 4s 40ms/step - loss: 0.8334 - accuracy: 0.7128 - val_loss: 4.1177 - val_accuracy: 0.1742
Epoch 7/500
104/104 [==============================] - 4s 40ms/step - loss: 0.7268 - accuracy: 0.7594 - val_loss: 1.8574 - val_accuracy: 0.5323
Epoch 8/500
104/104 [==============================] - 4s 40ms/step - loss: 0.6862 - accuracy: 0.7636 - val_loss: 1.6036 - val_accuracy: 0.5941
Epoch 9/500
104/104 [==============================] - 4s 40ms/step - loss: 0.6563 - accuracy: 0.7717 - val_loss: 0.9318 - val_accuracy: 0.6980
Epoch 10/500
104/104 [==============================] - 4s 40ms/step - loss: 0.5991 - accuracy: 0.7940 - val_loss: 0.8892 - val_accuracy: 0.7303
Epoch 11/500
104/104 [==============================] - 4s 40ms/step - loss: 0.5840 - accuracy: 0.7877 - val_loss: 1.2983 - val_accuracy: 0.6798
Epoch 12/500
104/104 [==============================] - 4s 40ms/step - loss: 0.5897 - accuracy: 0.7976 - val_loss: 0.7899 - val_accuracy: 0.7472
Epoch 13/500
104/104 [==============================] - 4s 39ms/step - loss: 0.5200 - accuracy: 0.8159 - val_loss: 1.3282 - val_accuracy: 0.6952
Epoch 14/500
104/104 [==============================] - 4s 40ms/step - loss: 0.5142 - accuracy: 0.8093 - val_loss: 0.9157 - val_accuracy: 0.7500
Epoch 15/500
104/104 [==============================] - 4s 39ms/step - loss: 0.4808 - accuracy: 0.8301 - val_loss: 1.4918 - val_accuracy: 0.6306
Epoch 16/500
104/104 [==============================] - 4s 39ms/step - loss: 0.4780 - accuracy: 0.8292 - val_loss: 1.2579 - val_accuracy: 0.7121
Epoch 17/500
104/104 [==============================] - 4s 40ms/step - loss: 0.4770 - accuracy: 0.8271 - val_loss: 1.4692 - val_accuracy: 0.6671
Epoch 18/500
104/104 [==============================] - 4s 40ms/step - loss: 0.4659 - accuracy: 0.8367 - val_loss: 0.6717 - val_accuracy: 0.7809
Epoch 19/500
104/104 [==============================] - 4s 39ms/step - loss: 0.4173 - accuracy: 0.8496 - val_loss: 1.3707 - val_accuracy: 0.7135
Epoch 20/500
104/104 [==============================] - 4s 40ms/step - loss: 0.4141 - accuracy: 0.8517 - val_loss: 0.6268 - val_accuracy: 0.8118
Epoch 21/500
104/104 [==============================] - 4s 40ms/step - loss: 0.4142 - accuracy: 0.8511 - val_loss: 1.2318 - val_accuracy: 0.6896
Epoch 22/500
104/104 [==============================] - 4s 39ms/step - loss: 0.4240 - accuracy: 0.8478 - val_loss: 0.7416 - val_accuracy: 0.7584
Epoch 23/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3999 - accuracy: 0.8523 - val_loss: 0.6145 - val_accuracy: 0.7935
Epoch 24/500
104/104 [==============================] - 4s 40ms/step - loss: 0.4007 - accuracy: 0.8481 - val_loss: 1.0979 - val_accuracy: 0.7444
Epoch 25/500
104/104 [==============================] - 4s 39ms/step - loss: 0.3576 - accuracy: 0.8686 - val_loss: 3.5313 - val_accuracy: 0.4635
Epoch 26/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3775 - accuracy: 0.8595 - val_loss: 0.6364 - val_accuracy: 0.8202
Epoch 27/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3761 - accuracy: 0.8665 - val_loss: 0.7440 - val_accuracy: 0.8230
Epoch 28/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3509 - accuracy: 0.8710 - val_loss: 1.8647 - val_accuracy: 0.6390
Epoch 29/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3568 - accuracy: 0.8734 - val_loss: 0.8103 - val_accuracy: 0.7795
Epoch 30/500
104/104 [==============================] - 4s 39ms/step - loss: 0.3572 - accuracy: 0.8734 - val_loss: 0.5902 - val_accuracy: 0.7949
Epoch 31/500
104/104 [==============================] - 4s 39ms/step - loss: 0.3552 - accuracy: 0.8650 - val_loss: 6.0704 - val_accuracy: 0.4270
Epoch 32/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3632 - accuracy: 0.8680 - val_loss: 1.0262 - val_accuracy: 0.7261
Epoch 33/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3370 - accuracy: 0.8734 - val_loss: 0.7507 - val_accuracy: 0.7823
Epoch 34/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3368 - accuracy: 0.8749 - val_loss: 1.0946 - val_accuracy: 0.7065
Epoch 35/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3421 - accuracy: 0.8782 - val_loss: 0.5504 - val_accuracy: 0.8399
Epoch 36/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2967 - accuracy: 0.8875 - val_loss: 0.7536 - val_accuracy: 0.7767
Epoch 37/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3351 - accuracy: 0.8728 - val_loss: 1.2481 - val_accuracy: 0.6924
Epoch 38/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3112 - accuracy: 0.8857 - val_loss: 0.4874 - val_accuracy: 0.8469
Epoch 39/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2930 - accuracy: 0.8935 - val_loss: 0.4800 - val_accuracy: 0.8820
Epoch 40/500
104/104 [==============================] - 4s 40ms/step - loss: 0.3048 - accuracy: 0.8836 - val_loss: 0.4232 - val_accuracy: 0.8806
Epoch 41/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2998 - accuracy: 0.8914 - val_loss: 0.9521 - val_accuracy: 0.7037
Epoch 42/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2935 - accuracy: 0.8878 - val_loss: 0.7686 - val_accuracy: 0.7640
Epoch 43/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2586 - accuracy: 0.8950 - val_loss: 1.0113 - val_accuracy: 0.7331
Epoch 44/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2925 - accuracy: 0.8866 - val_loss: 0.3706 - val_accuracy: 0.8876
Epoch 45/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2777 - accuracy: 0.8941 - val_loss: 1.4704 - val_accuracy: 0.6728
Epoch 46/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2755 - accuracy: 0.8956 - val_loss: 0.8833 - val_accuracy: 0.7669
Epoch 47/500
104/104 [==============================] - 4s 41ms/step - loss: 0.2692 - accuracy: 0.8944 - val_loss: 0.4558 - val_accuracy: 0.8680
Epoch 48/500
104/104 [==============================] - 4s 39ms/step - loss: 0.2868 - accuracy: 0.8896 - val_loss: 0.8197 - val_accuracy: 0.7823
Epoch 49/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2975 - accuracy: 0.8908 - val_loss: 1.3778 - val_accuracy: 0.6882
Epoch 50/500
104/104 [==============================] - 4s 39ms/step - loss: 0.3138 - accuracy: 0.8884 - val_loss: 0.9615 - val_accuracy: 0.7584
Epoch 51/500
104/104 [==============================] - 4s 39ms/step - loss: 0.2845 - accuracy: 0.8944 - val_loss: 0.7659 - val_accuracy: 0.8048
Epoch 52/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2729 - accuracy: 0.8992 - val_loss: 0.3823 - val_accuracy: 0.8806
Epoch 53/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2689 - accuracy: 0.8983 - val_loss: 0.9168 - val_accuracy: 0.7093
Epoch 54/500
104/104 [==============================] - 4s 39ms/step - loss: 0.2538 - accuracy: 0.9032 - val_loss: 1.3953 - val_accuracy: 0.7121
Epoch 55/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2533 - accuracy: 0.9098 - val_loss: 2.7934 - val_accuracy: 0.6025
Epoch 56/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2786 - accuracy: 0.8905 - val_loss: 0.3451 - val_accuracy: 0.9031
Epoch 57/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2635 - accuracy: 0.8926 - val_loss: 1.2967 - val_accuracy: 0.7008
Epoch 58/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2538 - accuracy: 0.9008 - val_loss: 0.7627 - val_accuracy: 0.7949
Epoch 59/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2491 - accuracy: 0.9038 - val_loss: 1.2746 - val_accuracy: 0.6615
Epoch 60/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2675 - accuracy: 0.9002 - val_loss: 0.7224 - val_accuracy: 0.8146
Epoch 61/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2506 - accuracy: 0.9068 - val_loss: 2.2133 - val_accuracy: 0.6138
Epoch 62/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2606 - accuracy: 0.9005 - val_loss: 0.5029 - val_accuracy: 0.8553
Epoch 63/500
104/104 [==============================] - 4s 39ms/step - loss: 0.2368 - accuracy: 0.9056 - val_loss: 0.3574 - val_accuracy: 0.8862
Epoch 64/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2231 - accuracy: 0.9140 - val_loss: 0.5377 - val_accuracy: 0.8455
Epoch 65/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2358 - accuracy: 0.9065 - val_loss: 0.6373 - val_accuracy: 0.8230
Epoch 66/500
104/104 [==============================] - 4s 39ms/step - loss: 0.2484 - accuracy: 0.9032 - val_loss: 1.0679 - val_accuracy: 0.6742
Epoch 67/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2675 - accuracy: 0.9011 - val_loss: 3.9020 - val_accuracy: 0.4284
Epoch 68/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2454 - accuracy: 0.9020 - val_loss: 0.6899 - val_accuracy: 0.8006
Epoch 69/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2415 - accuracy: 0.9122 - val_loss: 0.4356 - val_accuracy: 0.8820
Epoch 70/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2255 - accuracy: 0.9098 - val_loss: 0.3266 - val_accuracy: 0.9129
Epoch 71/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2266 - accuracy: 0.9074 - val_loss: 0.8805 - val_accuracy: 0.7416
Epoch 72/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2327 - accuracy: 0.9125 - val_loss: 0.8356 - val_accuracy: 0.8076
Epoch 73/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2451 - accuracy: 0.9080 - val_loss: 1.1925 - val_accuracy: 0.7346
Epoch 74/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2169 - accuracy: 0.9221 - val_loss: 0.5595 - val_accuracy: 0.8315
Epoch 75/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2112 - accuracy: 0.9176 - val_loss: 0.6357 - val_accuracy: 0.8413
Epoch 76/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2451 - accuracy: 0.9059 - val_loss: 0.5946 - val_accuracy: 0.8427
Epoch 77/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2313 - accuracy: 0.9137 - val_loss: 1.1050 - val_accuracy: 0.7107
Epoch 78/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2300 - accuracy: 0.9098 - val_loss: 0.4009 - val_accuracy: 0.8834
Epoch 79/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2285 - accuracy: 0.9143 - val_loss: 0.5457 - val_accuracy: 0.8666
Epoch 80/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2059 - accuracy: 0.9203 - val_loss: 0.4618 - val_accuracy: 0.8230
Epoch 81/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2175 - accuracy: 0.9149 - val_loss: 1.0320 - val_accuracy: 0.7612
Epoch 82/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2238 - accuracy: 0.9086 - val_loss: 0.7050 - val_accuracy: 0.7781
Epoch 83/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2153 - accuracy: 0.9104 - val_loss: 0.5484 - val_accuracy: 0.8792
Epoch 84/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2099 - accuracy: 0.9179 - val_loss: 0.4607 - val_accuracy: 0.8750
Epoch 85/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2216 - accuracy: 0.9182 - val_loss: 0.5627 - val_accuracy: 0.8610
Epoch 86/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2039 - accuracy: 0.9224 - val_loss: 0.5892 - val_accuracy: 0.8441
Epoch 87/500
104/104 [==============================] - 4s 40ms/step - loss: 0.1874 - accuracy: 0.9299 - val_loss: 1.0060 - val_accuracy: 0.7472
Epoch 88/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2143 - accuracy: 0.9143 - val_loss: 1.1789 - val_accuracy: 0.7935
Epoch 89/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2050 - accuracy: 0.9164 - val_loss: 0.7747 - val_accuracy: 0.8469
Epoch 90/500
104/104 [==============================] - 4s 40ms/step - loss: 0.2125 - accuracy: 0.9164 - val_loss: 0.7295 - val_accuracy: 0.8132

Model accuracy on Validation data

In [ ]:
# Note: this picks the *training* accuracy at the epoch with the lowest training loss;
# to report the validation metric itself, use history2.history['val_accuracy'] and 'val_loss'.
model2_accuracy_val = history2.history['accuracy'][np.argmin(history2.history['loss'])]
model2_accuracy_val
Out[ ]:
0.9299247860908508

Model accuracy on Test data

In [ ]:
model2_accuracy_test = model2.evaluate(X_test,y_test)[1]
model2_accuracy_test
23/23 [==============================] - 0s 4ms/step - loss: 0.2942 - accuracy: 0.9102
Out[ ]:
0.9102384448051453
In [ ]:
display(Markdown(f"""
**Observation:**

- Test Accuracy is {model2_accuracy_test * 100:.1f}%, which is quite good and higher than the first model's test accuracy.
- Validation accuracy for least loss is {model2_accuracy_val * 100:.1f}%, which is also a clear improvement over the first model's validation accuracy.
"""))

Observation:

  • Test Accuracy is 91.0%, which is quite good and higher than the first model's test accuracy.
  • Validation accuracy for least loss is 93.0%, which is also a clear improvement over the first model's validation accuracy.

Printing out the Confusion Matrix

In [ ]:
from sklearn.metrics import confusion_matrix
import itertools

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Greens):

    if normalize:
        # Normalize before plotting so the heatmap and the cell text agree
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    fig = plt.figure(figsize=(10, 10))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=90)
    plt.yticks(tick_marks, classes)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Predict the classes for the test dataset
predY2 = model2.predict(X_test)
predYClasses2 = np.argmax(predY2, axis = 1)
trueY = np.argmax(y_test, axis = 1)

# confusion matrix
confusionMTX = confusion_matrix(trueY, predYClasses2)

# plot the confusion matrix
plot_confusion_matrix(confusionMTX, classes = categ)
23/23 [==============================] - 0s 3ms/step

Observations:

  1. High Accuracy for Common Chickweed: The model correctly predicts 87 instances of "Common Chickweed" with minimal misclassifications.
  2. Strong Performance for Charlock: The model shows excellent precision for "Charlock" with 58 correct predictions and no misclassifications.
  3. Confusion in Loose Silky-bent: "Loose Silky-bent" is often misclassified as "Black-grass," indicating a challenge in differentiating these two classes.
  4. Misclassifications for Black-grass: "Black-grass" shows significant misclassification, particularly as "Loose Silky-bent" and other categories.
  5. Reliable Prediction for Scentless Mayweed: "Scentless Mayweed" is predicted accurately with 76 correct predictions and very few misclassifications.
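
The per-class counts read off the matrix above can be reduced to per-class recall (diagonal over row sums), which makes the Black-grass weakness quantifiable. A small sketch with a hypothetical 3-class matrix, not the seedling data:

```python
import numpy as np

def per_class_recall(cm):
    """Diagonal over row sums: the fraction of each true class predicted correctly."""
    return np.diag(cm) / cm.sum(axis=1)

# Hypothetical 3-class confusion matrix (rows = true labels, columns = predictions)
cm = np.array([[8, 1, 1],
               [2, 7, 1],
               [0, 0, 10]])
print(per_class_recall(cm))  # → [0.8 0.7 1. ]
```

Applied to `confusionMTX`, a low value in this vector flags exactly the classes (such as Black-grass) that the model fails to recover.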
In [ ]:
from sklearn.metrics import f1_score

print(f1_score(trueY, predYClasses2, average='macro'))    # macro: unweighted mean of per-class F1 scores
print(f1_score(trueY, predYClasses2, average='micro'))    # micro: F1 computed from global TP/FP/FN counts
print(f1_score(trueY, predYClasses2, average='weighted')) # weighted: per-class F1 scores weighted by class support
print(f1_score(trueY, predYClasses2, average=None))
0.8941531755218479
0.9102384291725105
0.9082638566994887
[0.43478261 0.97478992 0.92857143 0.97206704 0.86842105 0.95238095
 0.82352941 0.96969697 0.96815287 0.90909091 0.97260274 0.95575221]

Observation:

Above are the F1 scores under the macro, micro, weighted, and per-class (average=None) averaging methods. The per-class scores show one clear outlier: 0.43 for the first class (Black-grass), while every other class scores above 0.82.
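
To see why the three averages differ, here is a minimal sketch on a hypothetical imbalanced toy example (not the seedling data):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 1]  # imbalanced: four samples of class 0, two of class 1
y_pred = [0, 0, 0, 0, 1, 0]  # one class-1 sample is misclassified as class 0

macro = f1_score(y_true, y_pred, average='macro')       # unweighted mean of per-class F1: (8/9 + 2/3) / 2
micro = f1_score(y_true, y_pred, average='micro')       # from global TP/FP/FN; equals accuracy (5/6) here
weighted = f1_score(y_true, y_pred, average='weighted') # per-class F1 weighted by support (4 and 2)
print(macro, micro, weighted)
```

Because macro weights every class equally, a weak minority class (like Black-grass in our results) pulls the macro score down more than the micro or weighted scores.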

In [ ]:
from sklearn.metrics import classification_report

print(classification_report(trueY, predYClasses2, target_names=categ))
                           precision    recall  f1-score   support

              Black-grass       0.50      0.38      0.43        39
                 Charlock       0.95      1.00      0.97        58
                 Cleavers       0.95      0.91      0.93        43
         Common Chickweed       1.00      0.95      0.97        92
             Common wheat       0.77      1.00      0.87        33
                  Fat Hen       0.93      0.97      0.95        72
         Loose Silky-bent       0.79      0.86      0.82        98
                    Maize       0.97      0.97      0.97        33
        Scentless Mayweed       0.96      0.97      0.97        78
          Shepherds Purse       0.94      0.88      0.91        34
Small-flowered Cranesbill       1.00      0.95      0.97        75
               Sugar beet       0.98      0.93      0.96        58

                 accuracy                           0.91       713
                macro avg       0.90      0.90      0.89       713
             weighted avg       0.91      0.91      0.91       713

Observations:

  • The model's performance on Black-grass is suboptimal, with a recall of only 0.38 and a precision of 0.50.
  • The confusion matrix also indicates poor performance on Black-grass, suggesting the model struggles to accurately classify this class.
  • In contrast, other classes exhibit a better balance between precision and recall, resulting in good F1 scores.
  • Despite the challenges with Black-grass, the overall accuracy of the model is still impressive, indicating strength in classifying other classes.

Plotting Loss and Accuracy for both Training and Validation sets

In [ ]:
plt.rcParams["figure.figsize"] = (7,6)


history_df.loc[:, ['loss', 'val_loss']].plot(title="Cross-entropy")
history_df.loc[:, ['accuracy', 'val_accuracy']].plot(title="Accuracy")
Out[ ]:
<Axes: title={'center': 'Accuracy'}>

Observations:

  • Loss is decreasing, and validation loss stays close to training loss
  • Validation accuracy is also close to training accuracy
  • No overfitting or underfitting is observed, based on how closely the training and validation curves track each other

Saving Model and Weights

In [ ]:
model2.save('./classifier_color.h5')                     # save classifier (model) and architecture to single file
model2.save_weights('./classifier_color_weights.h5')

Conclusion:

We have built a CNN model to predict the class of a plant seedling, and it works quite well (increasing the number of epochs and/or adding layers could improve performance further). A CNN with Batch Normalization, MaxPooling, and Dropout, followed by Dense layers, is a good combination for image classification.

Model Building through Transfer Learning using ResNet50¶

We will be using the idea of Transfer Learning. We will load a pre-built architecture - ResNet50, which was trained on the ImageNet dataset and won the ImageNet (ILSVRC) competition in 2015.

For transfer learning with ResNet50, we will directly use the convolutional and pooling layers and freeze their weights, i.e., no training will be done on them. For classification, we will replace the existing fully-connected layers with FC layers created specifically for our problem.
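
The freeze-and-replace idea can be sketched as follows. The head sizes (`Dense(128)`, `Dropout(0.3)`) are illustrative assumptions, and `weights=None` is used here only to avoid downloading the ImageNet weights; the notebook itself uses `weights='imagenet'`:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load ResNet50 without its ImageNet classification head (64x64 RGB inputs, as in this project)
base = tf.keras.applications.ResNet50(include_top=False,
                                      input_shape=(64, 64, 3),
                                      pooling='avg',
                                      weights=None)  # 'imagenet' in the notebook
base.trainable = False  # freeze all convolutional and pooling weights

# Replace the classifier: a small fully-connected head for our 12 classes (sizes are assumptions)
model = models.Sequential([
    base,
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.3),
    layers.Dense(12, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```

With the base frozen, only the new head's weights are updated during training, which is why transfer learning converges quickly even on a modest dataset.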

In [ ]:
imported_model = tf.keras.applications.ResNet50(include_top=False,
                                                input_shape=(64, 64, 3),
                                                pooling='avg',
                                                classes=12,
                                                weights='imagenet')

imported_model.summary()
Model: "resnet50"
__________________________________________________________________________________________________
 Layer (type)                Output Shape                 Param #   Connected to                  
==================================================================================================
 input_1 (InputLayer)        [(None, 64, 64, 3)]          0         []                            
                                                                                                  
 conv1_pad (ZeroPadding2D)   (None, 70, 70, 3)            0         ['input_1[0][0]']             
                                                                                                  
 conv1_conv (Conv2D)         (None, 32, 32, 64)           9472      ['conv1_pad[0][0]']           
                                                                                                  
 conv1_bn (BatchNormalizati  (None, 32, 32, 64)           256       ['conv1_conv[0][0]']          
 on)                                                                                              
                                                                                                  
 conv1_relu (Activation)     (None, 32, 32, 64)           0         ['conv1_bn[0][0]']            
                                                                                                  
 pool1_pad (ZeroPadding2D)   (None, 34, 34, 64)           0         ['conv1_relu[0][0]']          
                                                                                                  
 pool1_pool (MaxPooling2D)   (None, 16, 16, 64)           0         ['pool1_pad[0][0]']           
                                                                                                  
 conv2_block1_1_conv (Conv2  (None, 16, 16, 64)           4160      ['pool1_pool[0][0]']          
 D)                                                                                               
                                                                                                  
 conv2_block1_1_bn (BatchNo  (None, 16, 16, 64)           256       ['conv2_block1_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block1_1_relu (Activ  (None, 16, 16, 64)           0         ['conv2_block1_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv2_block1_2_conv (Conv2  (None, 16, 16, 64)           36928     ['conv2_block1_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv2_block1_2_bn (BatchNo  (None, 16, 16, 64)           256       ['conv2_block1_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block1_2_relu (Activ  (None, 16, 16, 64)           0         ['conv2_block1_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv2_block1_0_conv (Conv2  (None, 16, 16, 256)          16640     ['pool1_pool[0][0]']          
 D)                                                                                               
                                                                                                  
 conv2_block1_3_conv (Conv2  (None, 16, 16, 256)          16640     ['conv2_block1_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv2_block1_0_bn (BatchNo  (None, 16, 16, 256)          1024      ['conv2_block1_0_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block1_3_bn (BatchNo  (None, 16, 16, 256)          1024      ['conv2_block1_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block1_add (Add)      (None, 16, 16, 256)          0         ['conv2_block1_0_bn[0][0]',   
                                                                     'conv2_block1_3_bn[0][0]']   
                                                                                                  
 conv2_block1_out (Activati  (None, 16, 16, 256)          0         ['conv2_block1_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv2_block2_1_conv (Conv2  (None, 16, 16, 64)           16448     ['conv2_block1_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv2_block2_1_bn (BatchNo  (None, 16, 16, 64)           256       ['conv2_block2_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block2_1_relu (Activ  (None, 16, 16, 64)           0         ['conv2_block2_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv2_block2_2_conv (Conv2  (None, 16, 16, 64)           36928     ['conv2_block2_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv2_block2_2_bn (BatchNo  (None, 16, 16, 64)           256       ['conv2_block2_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block2_2_relu (Activ  (None, 16, 16, 64)           0         ['conv2_block2_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv2_block2_3_conv (Conv2  (None, 16, 16, 256)          16640     ['conv2_block2_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv2_block2_3_bn (BatchNo  (None, 16, 16, 256)          1024      ['conv2_block2_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block2_add (Add)      (None, 16, 16, 256)          0         ['conv2_block1_out[0][0]',    
                                                                     'conv2_block2_3_bn[0][0]']   
                                                                                                  
 conv2_block2_out (Activati  (None, 16, 16, 256)          0         ['conv2_block2_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv2_block3_1_conv (Conv2  (None, 16, 16, 64)           16448     ['conv2_block2_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv2_block3_1_bn (BatchNo  (None, 16, 16, 64)           256       ['conv2_block3_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block3_1_relu (Activ  (None, 16, 16, 64)           0         ['conv2_block3_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv2_block3_2_conv (Conv2  (None, 16, 16, 64)           36928     ['conv2_block3_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv2_block3_2_bn (BatchNo  (None, 16, 16, 64)           256       ['conv2_block3_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block3_2_relu (Activ  (None, 16, 16, 64)           0         ['conv2_block3_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv2_block3_3_conv (Conv2  (None, 16, 16, 256)          16640     ['conv2_block3_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv2_block3_3_bn (BatchNo  (None, 16, 16, 256)          1024      ['conv2_block3_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv2_block3_add (Add)      (None, 16, 16, 256)          0         ['conv2_block2_out[0][0]',    
                                                                     'conv2_block3_3_bn[0][0]']   
                                                                                                  
 conv2_block3_out (Activati  (None, 16, 16, 256)          0         ['conv2_block3_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv3_block1_1_conv (Conv2  (None, 8, 8, 128)            32896     ['conv2_block3_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv3_block1_1_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block1_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block1_1_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block1_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block1_2_conv (Conv2  (None, 8, 8, 128)            147584    ['conv3_block1_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block1_2_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block1_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block1_2_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block1_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block1_0_conv (Conv2  (None, 8, 8, 512)            131584    ['conv2_block3_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv3_block1_3_conv (Conv2  (None, 8, 8, 512)            66048     ['conv3_block1_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block1_0_bn (BatchNo  (None, 8, 8, 512)            2048      ['conv3_block1_0_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block1_3_bn (BatchNo  (None, 8, 8, 512)            2048      ['conv3_block1_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block1_add (Add)      (None, 8, 8, 512)            0         ['conv3_block1_0_bn[0][0]',   
                                                                     'conv3_block1_3_bn[0][0]']   
                                                                                                  
 conv3_block1_out (Activati  (None, 8, 8, 512)            0         ['conv3_block1_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv3_block2_1_conv (Conv2  (None, 8, 8, 128)            65664     ['conv3_block1_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv3_block2_1_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block2_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block2_1_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block2_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block2_2_conv (Conv2  (None, 8, 8, 128)            147584    ['conv3_block2_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block2_2_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block2_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block2_2_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block2_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block2_3_conv (Conv2  (None, 8, 8, 512)            66048     ['conv3_block2_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block2_3_bn (BatchNo  (None, 8, 8, 512)            2048      ['conv3_block2_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block2_add (Add)      (None, 8, 8, 512)            0         ['conv3_block1_out[0][0]',    
                                                                     'conv3_block2_3_bn[0][0]']   
                                                                                                  
 conv3_block2_out (Activati  (None, 8, 8, 512)            0         ['conv3_block2_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv3_block3_1_conv (Conv2  (None, 8, 8, 128)            65664     ['conv3_block2_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv3_block3_1_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block3_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block3_1_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block3_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block3_2_conv (Conv2  (None, 8, 8, 128)            147584    ['conv3_block3_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block3_2_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block3_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block3_2_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block3_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block3_3_conv (Conv2  (None, 8, 8, 512)            66048     ['conv3_block3_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block3_3_bn (BatchNo  (None, 8, 8, 512)            2048      ['conv3_block3_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block3_add (Add)      (None, 8, 8, 512)            0         ['conv3_block2_out[0][0]',    
                                                                     'conv3_block3_3_bn[0][0]']   
                                                                                                  
 conv3_block3_out (Activati  (None, 8, 8, 512)            0         ['conv3_block3_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv3_block4_1_conv (Conv2  (None, 8, 8, 128)            65664     ['conv3_block3_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv3_block4_1_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block4_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block4_1_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block4_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block4_2_conv (Conv2  (None, 8, 8, 128)            147584    ['conv3_block4_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block4_2_bn (BatchNo  (None, 8, 8, 128)            512       ['conv3_block4_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block4_2_relu (Activ  (None, 8, 8, 128)            0         ['conv3_block4_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv3_block4_3_conv (Conv2  (None, 8, 8, 512)            66048     ['conv3_block4_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv3_block4_3_bn (BatchNo  (None, 8, 8, 512)            2048      ['conv3_block4_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv3_block4_add (Add)      (None, 8, 8, 512)            0         ['conv3_block3_out[0][0]',    
                                                                     'conv3_block4_3_bn[0][0]']   
                                                                                                  
 conv3_block4_out (Activati  (None, 8, 8, 512)            0         ['conv3_block4_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv4_block1_1_conv (Conv2  (None, 4, 4, 256)            131328    ['conv3_block4_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block1_1_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block1_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block1_1_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block1_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block1_2_conv (Conv2  (None, 4, 4, 256)            590080    ['conv4_block1_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block1_2_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block1_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block1_2_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block1_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block1_0_conv (Conv2  (None, 4, 4, 1024)           525312    ['conv3_block4_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block1_3_conv (Conv2  (None, 4, 4, 1024)           263168    ['conv4_block1_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block1_0_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block1_0_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block1_3_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block1_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block1_add (Add)      (None, 4, 4, 1024)           0         ['conv4_block1_0_bn[0][0]',   
                                                                     'conv4_block1_3_bn[0][0]']   
                                                                                                  
 conv4_block1_out (Activati  (None, 4, 4, 1024)           0         ['conv4_block1_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv4_block2_1_conv (Conv2  (None, 4, 4, 256)            262400    ['conv4_block1_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block2_1_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block2_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block2_1_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block2_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block2_2_conv (Conv2  (None, 4, 4, 256)            590080    ['conv4_block2_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block2_2_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block2_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block2_2_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block2_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block2_3_conv (Conv2  (None, 4, 4, 1024)           263168    ['conv4_block2_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block2_3_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block2_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block2_add (Add)      (None, 4, 4, 1024)           0         ['conv4_block1_out[0][0]',    
                                                                     'conv4_block2_3_bn[0][0]']   
                                                                                                  
 conv4_block2_out (Activati  (None, 4, 4, 1024)           0         ['conv4_block2_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv4_block3_1_conv (Conv2  (None, 4, 4, 256)            262400    ['conv4_block2_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block3_1_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block3_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block3_1_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block3_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block3_2_conv (Conv2  (None, 4, 4, 256)            590080    ['conv4_block3_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block3_2_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block3_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block3_2_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block3_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block3_3_conv (Conv2  (None, 4, 4, 1024)           263168    ['conv4_block3_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block3_3_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block3_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block3_add (Add)      (None, 4, 4, 1024)           0         ['conv4_block2_out[0][0]',    
                                                                     'conv4_block3_3_bn[0][0]']   
                                                                                                  
 conv4_block3_out (Activati  (None, 4, 4, 1024)           0         ['conv4_block3_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv4_block4_1_conv (Conv2  (None, 4, 4, 256)            262400    ['conv4_block3_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block4_1_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block4_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block4_1_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block4_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block4_2_conv (Conv2  (None, 4, 4, 256)            590080    ['conv4_block4_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block4_2_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block4_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block4_2_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block4_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block4_3_conv (Conv2  (None, 4, 4, 1024)           263168    ['conv4_block4_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block4_3_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block4_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block4_add (Add)      (None, 4, 4, 1024)           0         ['conv4_block3_out[0][0]',    
                                                                     'conv4_block4_3_bn[0][0]']   
                                                                                                  
 conv4_block4_out (Activati  (None, 4, 4, 1024)           0         ['conv4_block4_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv4_block5_1_conv (Conv2  (None, 4, 4, 256)            262400    ['conv4_block4_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block5_1_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block5_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block5_1_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block5_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block5_2_conv (Conv2  (None, 4, 4, 256)            590080    ['conv4_block5_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block5_2_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block5_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block5_2_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block5_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block5_3_conv (Conv2  (None, 4, 4, 1024)           263168    ['conv4_block5_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block5_3_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block5_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block5_add (Add)      (None, 4, 4, 1024)           0         ['conv4_block4_out[0][0]',    
                                                                     'conv4_block5_3_bn[0][0]']   
                                                                                                  
 conv4_block5_out (Activati  (None, 4, 4, 1024)           0         ['conv4_block5_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv4_block6_1_conv (Conv2  (None, 4, 4, 256)            262400    ['conv4_block5_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv4_block6_1_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block6_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block6_1_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block6_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block6_2_conv (Conv2  (None, 4, 4, 256)            590080    ['conv4_block6_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block6_2_bn (BatchNo  (None, 4, 4, 256)            1024      ['conv4_block6_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block6_2_relu (Activ  (None, 4, 4, 256)            0         ['conv4_block6_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv4_block6_3_conv (Conv2  (None, 4, 4, 1024)           263168    ['conv4_block6_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv4_block6_3_bn (BatchNo  (None, 4, 4, 1024)           4096      ['conv4_block6_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv4_block6_add (Add)      (None, 4, 4, 1024)           0         ['conv4_block5_out[0][0]',    
                                                                     'conv4_block6_3_bn[0][0]']   
                                                                                                  
 conv4_block6_out (Activati  (None, 4, 4, 1024)           0         ['conv4_block6_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv5_block1_1_conv (Conv2  (None, 2, 2, 512)            524800    ['conv4_block6_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv5_block1_1_bn (BatchNo  (None, 2, 2, 512)            2048      ['conv5_block1_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block1_1_relu (Activ  (None, 2, 2, 512)            0         ['conv5_block1_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv5_block1_2_conv (Conv2  (None, 2, 2, 512)            2359808   ['conv5_block1_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv5_block1_2_bn (BatchNo  (None, 2, 2, 512)            2048      ['conv5_block1_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block1_2_relu (Activ  (None, 2, 2, 512)            0         ['conv5_block1_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv5_block1_0_conv (Conv2  (None, 2, 2, 2048)           2099200   ['conv4_block6_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv5_block1_3_conv (Conv2  (None, 2, 2, 2048)           1050624   ['conv5_block1_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv5_block1_0_bn (BatchNo  (None, 2, 2, 2048)           8192      ['conv5_block1_0_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block1_3_bn (BatchNo  (None, 2, 2, 2048)           8192      ['conv5_block1_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block1_add (Add)      (None, 2, 2, 2048)           0         ['conv5_block1_0_bn[0][0]',   
                                                                     'conv5_block1_3_bn[0][0]']   
                                                                                                  
 conv5_block1_out (Activati  (None, 2, 2, 2048)           0         ['conv5_block1_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv5_block2_1_conv (Conv2  (None, 2, 2, 512)            1049088   ['conv5_block1_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv5_block2_1_bn (BatchNo  (None, 2, 2, 512)            2048      ['conv5_block2_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block2_1_relu (Activ  (None, 2, 2, 512)            0         ['conv5_block2_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv5_block2_2_conv (Conv2  (None, 2, 2, 512)            2359808   ['conv5_block2_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv5_block2_2_bn (BatchNo  (None, 2, 2, 512)            2048      ['conv5_block2_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block2_2_relu (Activ  (None, 2, 2, 512)            0         ['conv5_block2_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv5_block2_3_conv (Conv2  (None, 2, 2, 2048)           1050624   ['conv5_block2_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv5_block2_3_bn (BatchNo  (None, 2, 2, 2048)           8192      ['conv5_block2_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block2_add (Add)      (None, 2, 2, 2048)           0         ['conv5_block1_out[0][0]',    
                                                                     'conv5_block2_3_bn[0][0]']   
                                                                                                  
 conv5_block2_out (Activati  (None, 2, 2, 2048)           0         ['conv5_block2_add[0][0]']    
 on)                                                                                              
                                                                                                  
 conv5_block3_1_conv (Conv2  (None, 2, 2, 512)            1049088   ['conv5_block2_out[0][0]']    
 D)                                                                                               
                                                                                                  
 conv5_block3_1_bn (BatchNo  (None, 2, 2, 512)            2048      ['conv5_block3_1_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block3_1_relu (Activ  (None, 2, 2, 512)            0         ['conv5_block3_1_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv5_block3_2_conv (Conv2  (None, 2, 2, 512)            2359808   ['conv5_block3_1_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv5_block3_2_bn (BatchNo  (None, 2, 2, 512)            2048      ['conv5_block3_2_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block3_2_relu (Activ  (None, 2, 2, 512)            0         ['conv5_block3_2_bn[0][0]']   
 ation)                                                                                           
                                                                                                  
 conv5_block3_3_conv (Conv2  (None, 2, 2, 2048)           1050624   ['conv5_block3_2_relu[0][0]'] 
 D)                                                                                               
                                                                                                  
 conv5_block3_3_bn (BatchNo  (None, 2, 2, 2048)           8192      ['conv5_block3_3_conv[0][0]'] 
 rmalization)                                                                                     
                                                                                                  
 conv5_block3_add (Add)      (None, 2, 2, 2048)           0         ['conv5_block2_out[0][0]',    
                                                                     'conv5_block3_3_bn[0][0]']   
                                                                                                  
 conv5_block3_out (Activati  (None, 2, 2, 2048)           0         ['conv5_block3_add[0][0]']    
 on)                                                                                              
                                                                                                  
 avg_pool (GlobalAveragePoo  (None, 2048)                 0         ['conv5_block3_out[0][0]']    
 ling2D)                                                                                          
                                                                                                  
==================================================================================================
Total params: 23587712 (89.98 MB)
Trainable params: 23534592 (89.78 MB)
Non-trainable params: 53120 (207.50 KB)
__________________________________________________________________________________________________
In [ ]:
# Making all the layers of the ResNet50 base model non-trainable, i.e., freezing them
for layer in imported_model.layers:
    layer.trainable = False
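The effect of the freeze loop can be verified by counting trainable weights. Below is a minimal stand-alone sketch; the tiny Dense model is purely illustrative, standing in for the ResNet50 base:

```python
import tensorflow as tf

# A tiny stand-in model (illustrative only, not the ResNet50 base)
base = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])

# Freeze every layer, exactly as done for imported_model above
for layer in base.layers:
    layer.trainable = False

# After freezing, Keras reports no trainable weights at all
print(len(base.trainable_weights))      # 0
print(len(base.non_trainable_weights))  # 4: kernel + bias for each Dense layer
```

Freezing must happen before `compile()`: changing `trainable` afterwards has no effect until the model is recompiled.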
In [ ]:
backend.clear_session()
# Fixing the seeds for the random number generators so that we receive the same output every time
np.random.seed(42)
import random
random.seed(42)
tf.random.set_seed(42)
In [ ]:
# Initialising the CNN classifier
model3 = Sequential()

# Adding the convolutional base of the ResNet50 model from above
model3.add(imported_model)

# Flattening the layer before fully connected layers
model3.add(Flatten())

# Adding a fully connected layer with 512 neurons
model3.add(layers.BatchNormalization())
model3.add(Dense(units = 512, activation = 'relu'))

# Adding dropout with probability 0.2
model3.add(Dropout(0.2))


# Adding a fully connected layer with 128 neurons
model3.add(layers.BatchNormalization())
model3.add(Dense(units = 128, activation = 'relu'))
model3.add(Dropout(0.2))


# The final output layer with 12 neurons (one per class) for the categorical classification
model3.add(Dense(units = 12, activation = 'softmax'))

Using the Adam optimizer with categorical cross-entropy as the loss function and accuracy as the evaluation metric

In [ ]:
# Initialize the Adam optimizer
adam_opt = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model3.compile(optimizer = adam_opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])

Printing Model Summary

In [ ]:
model3.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 resnet50 (Functional)       (None, 2048)              23587712  
                                                                 
 flatten (Flatten)           (None, 2048)              0         
                                                                 
 batch_normalization (Batch  (None, 2048)              8192      
 Normalization)                                                  
                                                                 
 dense (Dense)               (None, 512)               1049088   
                                                                 
 dropout (Dropout)           (None, 512)               0         
                                                                 
 batch_normalization_1 (Bat  (None, 512)               2048      
 chNormalization)                                                
                                                                 
 dense_1 (Dense)             (None, 128)               65664     
                                                                 
 dropout_1 (Dropout)         (None, 128)               0         
                                                                 
 dense_2 (Dense)             (None, 12)                1548      
                                                                 
=================================================================
Total params: 24714252 (94.28 MB)
Trainable params: 1121420 (4.28 MB)
Non-trainable params: 23592832 (90.00 MB)
_________________________________________________________________
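The trainable-parameter count reported above can be cross-checked by hand: each Dense layer holds inputs × units + units parameters (kernel plus bias), and each BatchNormalization layer holds 4 × features parameters, of which only gamma and beta (half) are trainable:

```python
# Cross-check the parameter counts reported by model3.summary()
dense_512 = 2048 * 512 + 512          # first Dense head layer: 1,049,088
dense_128 = 512 * 128 + 128           # second Dense head layer: 65,664
dense_out = 128 * 12 + 12             # 12-class output layer: 1,548
bn_gamma_beta = 2 * 2048 + 2 * 512    # trainable BatchNorm params (gamma, beta)

trainable = dense_512 + dense_128 + dense_out + bn_gamma_beta
print(trainable)  # 1121420 -- matches "Trainable params" above
```

The remaining 23,592,832 non-trainable parameters are the frozen ResNet50 weights plus the moving mean and variance of the head's BatchNormalization layers.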

EarlyStopping

In [ ]:
callback_es = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20, min_delta=0.0001, restore_best_weights=True)
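The interaction of `patience` and `min_delta` can be demonstrated on a toy model; the data, shapes, and the deliberately large `min_delta` below are illustrative only:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
np.random.seed(0)

# Toy data and model (shapes are illustrative only)
X = np.random.rand(64, 4).astype('float32')
y = np.random.randint(0, 2, size=(64,)).astype('float32')

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# A deliberately large min_delta means the loss rarely counts as "improving",
# so training halts after `patience` unimproved epochs instead of running all 100
es = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3,
                                      min_delta=0.5, restore_best_weights=True)
history = model.fit(X, y, epochs=100, verbose=0, callbacks=[es])
print(len(history.history['loss']))  # far fewer than 100
```

With `restore_best_weights=True`, the model's weights are rolled back to the epoch with the best monitored value, so the extra `patience` epochs do not degrade the final model.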

Fitting the classifier on the training set and validating on the validation set

In [ ]:
print("Shape of X_train:", X_train.shape)
Shape of X_train: (3325, 64, 64, 3)
In [ ]:
history3 = model3.fit(training_set,
               batch_size=32,
               epochs=500,
               validation_data = (X_val,y_val),
               shuffle=True,
               callbacks = [callback_es])
Epoch 1/500
104/104 [==============================] - 12s 61ms/step - loss: 2.3316 - accuracy: 0.2611 - val_loss: 2.4766 - val_accuracy: 0.0941
Epoch 2/500
104/104 [==============================] - 4s 43ms/step - loss: 2.0309 - accuracy: 0.3272 - val_loss: 2.4393 - val_accuracy: 0.0997
Epoch 3/500
104/104 [==============================] - 5s 44ms/step - loss: 1.8931 - accuracy: 0.3546 - val_loss: 2.0994 - val_accuracy: 0.2612
Epoch 4/500
104/104 [==============================] - 4s 43ms/step - loss: 1.8841 - accuracy: 0.3585 - val_loss: 1.8263 - val_accuracy: 0.4059
Epoch 5/500
104/104 [==============================] - 5s 44ms/step - loss: 1.8043 - accuracy: 0.3747 - val_loss: 1.6927 - val_accuracy: 0.4326
Epoch 6/500
104/104 [==============================] - 5s 44ms/step - loss: 1.7639 - accuracy: 0.3820 - val_loss: 1.5678 - val_accuracy: 0.4649
Epoch 7/500
104/104 [==============================] - 5s 43ms/step - loss: 1.7270 - accuracy: 0.4063 - val_loss: 1.6433 - val_accuracy: 0.4410
Epoch 8/500
104/104 [==============================] - 5s 43ms/step - loss: 1.7071 - accuracy: 0.4075 - val_loss: 1.5358 - val_accuracy: 0.4579
Epoch 9/500
104/104 [==============================] - 5s 43ms/step - loss: 1.7029 - accuracy: 0.4021 - val_loss: 1.6117 - val_accuracy: 0.4354
Epoch 10/500
104/104 [==============================] - 4s 43ms/step - loss: 1.6555 - accuracy: 0.4262 - val_loss: 1.5505 - val_accuracy: 0.4551
Epoch 11/500
104/104 [==============================] - 5s 44ms/step - loss: 1.6882 - accuracy: 0.4159 - val_loss: 1.5106 - val_accuracy: 0.4635
Epoch 12/500
104/104 [==============================] - 5s 44ms/step - loss: 1.6467 - accuracy: 0.4205 - val_loss: 1.4922 - val_accuracy: 0.4705
Epoch 13/500
104/104 [==============================] - 4s 42ms/step - loss: 1.6314 - accuracy: 0.4217 - val_loss: 1.5489 - val_accuracy: 0.4410
Epoch 14/500
104/104 [==============================] - 4s 43ms/step - loss: 1.6130 - accuracy: 0.4355 - val_loss: 1.4743 - val_accuracy: 0.4551
Epoch 15/500
104/104 [==============================] - 5s 43ms/step - loss: 1.5856 - accuracy: 0.4412 - val_loss: 1.4667 - val_accuracy: 0.4902
Epoch 16/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5946 - accuracy: 0.4571 - val_loss: 1.4657 - val_accuracy: 0.4874
Epoch 17/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5872 - accuracy: 0.4448 - val_loss: 1.4972 - val_accuracy: 0.4565
Epoch 18/500
104/104 [==============================] - 5s 44ms/step - loss: 1.5875 - accuracy: 0.4397 - val_loss: 1.4319 - val_accuracy: 0.5098
Epoch 19/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5698 - accuracy: 0.4608 - val_loss: 1.4667 - val_accuracy: 0.4860
Epoch 20/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5732 - accuracy: 0.4547 - val_loss: 1.4859 - val_accuracy: 0.4803
Epoch 21/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5613 - accuracy: 0.4523 - val_loss: 1.4168 - val_accuracy: 0.5098
Epoch 22/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5489 - accuracy: 0.4529 - val_loss: 1.4362 - val_accuracy: 0.4874
Epoch 23/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5531 - accuracy: 0.4574 - val_loss: 1.4595 - val_accuracy: 0.4817
Epoch 24/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5409 - accuracy: 0.4629 - val_loss: 1.5624 - val_accuracy: 0.4396
Epoch 25/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5430 - accuracy: 0.4695 - val_loss: 1.4262 - val_accuracy: 0.5056
Epoch 26/500
104/104 [==============================] - 4s 43ms/step - loss: 1.4975 - accuracy: 0.4764 - val_loss: 1.4501 - val_accuracy: 0.4789
Epoch 27/500
104/104 [==============================] - 5s 43ms/step - loss: 1.5129 - accuracy: 0.4734 - val_loss: 1.3762 - val_accuracy: 0.5449
Epoch 28/500
104/104 [==============================] - 4s 43ms/step - loss: 1.4804 - accuracy: 0.4752 - val_loss: 1.5458 - val_accuracy: 0.4719
Epoch 29/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5037 - accuracy: 0.4701 - val_loss: 1.4006 - val_accuracy: 0.4944
Epoch 30/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5005 - accuracy: 0.4710 - val_loss: 1.4473 - val_accuracy: 0.4902
Epoch 31/500
104/104 [==============================] - 4s 42ms/step - loss: 1.5114 - accuracy: 0.4827 - val_loss: 1.5140 - val_accuracy: 0.4761
Epoch 32/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5052 - accuracy: 0.4692 - val_loss: 1.4710 - val_accuracy: 0.4775
Epoch 33/500
104/104 [==============================] - 4s 43ms/step - loss: 1.5054 - accuracy: 0.4668 - val_loss: 1.4427 - val_accuracy: 0.4916
Epoch 34/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4892 - accuracy: 0.4701 - val_loss: 1.4392 - val_accuracy: 0.4902
Epoch 35/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4750 - accuracy: 0.4776 - val_loss: 1.4832 - val_accuracy: 0.4831
Epoch 36/500
104/104 [==============================] - 5s 43ms/step - loss: 1.5158 - accuracy: 0.4737 - val_loss: 1.3905 - val_accuracy: 0.5112
Epoch 37/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4742 - accuracy: 0.4635 - val_loss: 1.4977 - val_accuracy: 0.4747
Epoch 38/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4776 - accuracy: 0.4869 - val_loss: 1.3918 - val_accuracy: 0.5098
Epoch 39/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4762 - accuracy: 0.4800 - val_loss: 1.3885 - val_accuracy: 0.5112
Epoch 40/500
104/104 [==============================] - 5s 43ms/step - loss: 1.4595 - accuracy: 0.4845 - val_loss: 1.4253 - val_accuracy: 0.5042
Epoch 41/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4677 - accuracy: 0.4794 - val_loss: 1.4207 - val_accuracy: 0.5042
Epoch 42/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4586 - accuracy: 0.4794 - val_loss: 1.4244 - val_accuracy: 0.4972
Epoch 43/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4313 - accuracy: 0.4995 - val_loss: 1.3695 - val_accuracy: 0.5112
Epoch 44/500
104/104 [==============================] - 4s 43ms/step - loss: 1.4420 - accuracy: 0.5026 - val_loss: 1.4264 - val_accuracy: 0.5169
Epoch 45/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4597 - accuracy: 0.4875 - val_loss: 1.3915 - val_accuracy: 0.5183
Epoch 46/500
104/104 [==============================] - 4s 42ms/step - loss: 1.4374 - accuracy: 0.4881 - val_loss: 1.3986 - val_accuracy: 0.5154
Epoch 47/500
104/104 [==============================] - 5s 45ms/step - loss: 1.4344 - accuracy: 0.4854 - val_loss: 1.3369 - val_accuracy: 0.5407

Model accuracy on Validation data

In [ ]:
model3_accuracy_val = history3.history['accuracy'][np.argmin(history3.history['loss'])]
model3_accuracy_val
Out[ ]:
0.49954888224601746
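The accuracy reported above is taken at the epoch with the lowest recorded loss. As a minimal sketch of that selection (using a made-up history dict, not the notebook's `history3`):

```python
import numpy as np

# Toy Keras-style history dict; the values are illustrative only
history = {'loss': [1.9, 1.5, 1.3, 1.4],
           'accuracy': [0.30, 0.42, 0.50, 0.48]}

# Same selection as above: accuracy at the epoch of minimum loss
best_epoch = int(np.argmin(history['loss']))
best_accuracy = history['accuracy'][best_epoch]
print(best_epoch, best_accuracy)  # 2 0.5
```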

Model accuracy on Test data

In [ ]:
model3_accuracy_test = model3.evaluate(X_test,y_test)[1]
model3_accuracy_test
23/23 [==============================] - 1s 22ms/step - loss: 1.3787 - accuracy: 0.5540
Out[ ]:
0.5539972186088562
In [ ]:
display(Markdown(f"""
**Observation:**

- Test Accuracy is {model3_accuracy_test * 100:.1f}%, which is not that good and is lower than the test accuracies of the other models
- Validation accuracy at the lowest-loss epoch is {model3_accuracy_val * 100:.1f}%, which is also not that good and is lower than those of the other models
"""))

Observation:

  • Test Accuracy is 55.4%, which is not that good and is lower than the test accuracies of the other models
  • Validation accuracy at the lowest-loss epoch is 50.0%, which is also not that good and is lower than those of the other models

Printing out the Confusion Matrix

In [ ]:
from sklearn.metrics import confusion_matrix
import itertools

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Greens):

    # Normalize before plotting so the heatmap colors and the cell text agree
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    fig = plt.figure(figsize=(10,10))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=90)
    plt.yticks(tick_marks, classes)

    # Annotate each cell, switching text color for readability on dark cells
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], '.2f') if normalize else cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Predict the values from the validation dataset
predY3 = model3.predict(X_test)
predYClasses3 = np.argmax(predY3, axis = 1)
trueY = np.argmax(y_test, axis = 1)

# confusion matrix
confusionMTX = confusion_matrix(trueY, predYClasses3)

# plot the confusion matrix
plot_confusion_matrix(confusionMTX, classes = categ)
23/23 [==============================] - 2s 6ms/step

Observations:

  1. High Accuracy for Loose Silky-bent: The model correctly predicts 91 instances of "Loose Silky-bent" with minimal misclassifications.
  2. Misclassifications for Black-grass: "Black-grass" shows significant misclassification, particularly as "Scentless Mayweed" and other categories.
  3. Confusion in Scentless Mayweed: "Scentless Mayweed" is often misclassified, especially as "Cleavers" and "Shepherds Purse."
  4. Mixed Performance for Charlock: The model has good accuracy for "Charlock" with 45 correct predictions, but some misclassifications occur.
  5. Challenges with Fat Hen: The model shows notable misclassifications for "Fat Hen," indicating difficulty in differentiating this class from others.
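The per-class hit rates behind these observations can be read straight off the confusion matrix: diagonal counts divided by row totals give each class's recall. A minimal sketch with a made-up 3×3 matrix (not the notebook's 12-class one):

```python
import numpy as np

# Hypothetical confusion matrix: rows are true labels, columns are predictions
cm = np.array([[8, 1, 1],
               [2, 6, 2],
               [0, 3, 7]])

# Recall per class = correct predictions (diagonal) / actual instances (row sum)
recall_per_class = np.diag(cm) / cm.sum(axis=1)
print(recall_per_class)  # [0.8 0.6 0.7]
```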
In [ ]:
from sklearn.metrics import f1_score

print(f1_score(trueY, predYClasses3, average='macro'))    # macro: unweighted mean of each class's F1 score
print(f1_score(trueY, predYClasses3, average='micro'))    # micro: counts true/false positives and negatives globally
print(f1_score(trueY, predYClasses3, average='weighted')) # weighted: per-class F1 averaged by class support (instance counts)
print(f1_score(trueY, predYClasses3, average=None))       # no averaging: one F1 score per class
0.46906801227282297
0.5539971949509116
0.5203852314943914
[0.         0.50847458 0.54285714 0.59       0.         0.5862069
 0.69465649 0.63888889 0.39370079 0.35135135 0.7755102  0.54716981]

Observations:

  • The F1 score is calculated for different averaging methods:
    • macro: takes the average of each class's F1 score (output: 0.469)
    • micro: calculates positive and negative values globally (output: 0.554)
    • weighted: averages F1 scores using the number of instances in each class as weight (output: 0.520)
    • None: returns the F1 score for each class separately (output: an array of 12 values, one per class)
  • The macro average F1 score is lower than the micro and weighted averages, indicating potential class imbalance issues.
  • The weighted average F1 score is closer to the micro average, suggesting that the class weights have a significant impact on the F1 score calculation.
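Why the macro average drops below the micro average under class imbalance can be seen on a tiny made-up example (the labels here are illustrative, not the seedling classes):

```python
from sklearn.metrics import f1_score

# Imbalanced toy labels: class 0 dominates, class 2 is rare and never predicted
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 0, 0, 0, 1, 1, 1, 1]

macro    = f1_score(y_true, y_pred, average='macro', zero_division=0)
micro    = f1_score(y_true, y_pred, average='micro', zero_division=0)
weighted = f1_score(y_true, y_pred, average='weighted', zero_division=0)

# The rare class's F1 of 0 pulls the macro average down, while for
# single-label multiclass data micro F1 equals plain accuracy (7 of 9 correct)
print(macro < weighted < micro)  # True
```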
In [ ]:
from sklearn.metrics import classification_report

print(classification_report(trueY, predYClasses3, target_names=categ))
                           precision    recall  f1-score   support

              Black-grass       0.00      0.00      0.00        39
                 Charlock       0.38      0.78      0.51        58
                 Cleavers       0.70      0.44      0.54        43
         Common Chickweed       0.55      0.64      0.59        92
             Common wheat       0.00      0.00      0.00        33
                  Fat Hen       0.77      0.47      0.59        72
         Loose Silky-bent       0.55      0.93      0.69        98
                    Maize       0.59      0.70      0.64        33
        Scentless Mayweed       0.51      0.32      0.39        78
          Shepherds Purse       0.33      0.38      0.35        34
Small-flowered Cranesbill       0.79      0.76      0.78        75
               Sugar beet       0.60      0.50      0.55        58

                 accuracy                           0.55       713
                macro avg       0.48      0.49      0.47       713
             weighted avg       0.53      0.55      0.52       713

Observations:

  • The model performs poorly on Black-grass and Common wheat, with zero precision, recall, and F1 scores.
  • Moderate performance is observed for most other classes, with varying precision, recall, and F1 scores.
  • The overall accuracy is 0.55 (55%), with the weighted average precision, recall, and F1 scores slightly higher than the macro averages.
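To act on a report like this programmatically, `classification_report` can return a dict instead of text, which makes it easy to flag weak classes. A hedged sketch on toy labels (the labels and the 0.75 threshold are illustrative):

```python
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 2, 2]

# output_dict=True returns nested per-class metrics keyed by class name
report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)

# Flag classes whose F1 falls below an arbitrary threshold
weak = [cls for cls, metrics in report.items()
        if cls not in ('accuracy', 'macro avg', 'weighted avg')
        and metrics['f1-score'] < 0.75]
print(weak)  # ['1']
```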

Plotting Loss and Accuracy for both Training and Validation sets

In [ ]:
plt.rcParams["figure.figsize"] = (7,6)


history_df.loc[:, ['loss', 'val_loss']].plot(title="Cross-entropy")
history_df.loc[:, ['accuracy', 'val_accuracy']].plot(title="Accuracy")
Out[ ]:
<Axes: title={'center': 'Accuracy'}>

Observation:

  • Loss is decreasing and the validation loss stays close to the training loss
  • Validation accuracy is also close to the training accuracy
  • No overfitting or underfitting is observed based on the training and validation curves
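One simple way to back this observation up numerically is the gap between the final training and validation accuracies. A minimal sketch with illustrative values in the range seen above (the 0.05 threshold is arbitrary):

```python
# Final-epoch accuracies; a large positive gap (train >> val) suggests overfitting
train_acc = [0.45, 0.47, 0.48, 0.49]
val_acc   = [0.46, 0.48, 0.49, 0.51]

gap = train_acc[-1] - val_acc[-1]
print(round(gap, 2), gap > 0.05)  # -0.02 False
```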

Saving Model and Weights

In [ ]:
model3.save('./classifier_color.h5')                     # save classifier (model) and architecture to single file
model3.save_weights('./classifier_color_weights.h5')

Conclusion:

This CNN model was built to predict the class of a plant seedling, but it has not performed well and needs significant improvement.

Comparison of Models and Final Selection

In [ ]:
df = pd.DataFrame({
    'Models': ['Grayscale images', 'Preprocessed color images', 'Transfer Learning using VGG16'],
    'Validation Accuracy': [f"{model1_accuracy_val * 100:.1f}%", f"{model2_accuracy_val * 100:.1f}%", f"{model3_accuracy_val * 100:.1f}%"],
    'Test Accuracy': [f"{model1_accuracy_test * 100:.1f}%", f"{model2_accuracy_test * 100:.1f}%", f"{model3_accuracy_test * 100:.1f}%"]
})
df
Out[ ]:
Models Validation Accuracy Test Accuracy
0 Grayscale images 84.4% 78.1%
1 Preprocessed color images 93.0% 91.0%
2 Transfer Learning using VGG16 50.0% 55.4%

Observations:

  • The Preprocessed color images model is selected as the final model due to its superior performance.
  • This model achieves a validation accuracy of 93.0% and a test accuracy of 91.0%, outperforming both the Grayscale images and Transfer Learning models.
  • The Preprocessed color images model demonstrates better generalization and robustness for image classification tasks, surpassing the Grayscale images model (84.4% validation accuracy, 78.1% test accuracy) and the Transfer Learning model using VGG16 (50.0% validation accuracy, 55.4% test accuracy).
In [ ]:
final_model = model2

Visualizing the prediction¶

In [ ]:
X_train_color, X_test1_color, y_train_color, y_test1_color = train_test_split(images, labels, test_size=0.30, stratify=labels,random_state = random_state)
X_val_color, X_test_color, y_val_color, y_test_color = train_test_split(X_test1_color, y_test1_color, test_size=0.50, stratify=y_test1_color, random_state = random_state)

pred_2 = np.argmax(final_model.predict(np.expand_dims(X_test[2],axis=0)),axis=1)
actual_2 = np.argmax(y_test[2])
print("Model predicted Category Name for X_test 2 is: ", categ[pred_2])
print("Actual Category Name for X_test 2 is: ",categ[actual_2] )
cv2_imshow(X_test[2]*255)
print("\n")
cv2_imshow(X_test_color[2])
print("--------------------------------------------------------------------------------------------------")
pred_3 = np.argmax(final_model.predict(np.expand_dims(X_test[3],axis=0)),axis=1)
actual_3 = np.argmax(y_test[3])
print("Model predicted Category Name for X_test 3 is: ", categ[pred_3])
print("Actual Category Name for X_test 3 is: ",categ[actual_3] )
cv2_imshow(X_test[3]*255)
print("\n")
cv2_imshow(X_test_color[3])
print("--------------------------------------------------------------------------------------------------")
pred_33 = np.argmax(final_model.predict(np.expand_dims(X_test[33],axis=0)),axis=1)
actual_33 = np.argmax(y_test[33])
print("Model predicted Category Name for X_test 33 is: ", categ[pred_33])
print("Actual Category Name for X_test 33 is: ",categ[actual_33] )
cv2_imshow(X_test[33]*255)
print("\n")
cv2_imshow(X_test_color[33])
print("--------------------------------------------------------------------------------------------------")
pred_36 = np.argmax(final_model.predict(np.expand_dims(X_test[36],axis=0)),axis=1)
actual_36 = np.argmax(y_test[36])
print("Model predicted Category Name for X_test 36 is: ", categ[pred_36])
print("Actual Category Name for X_test 36 is: ",categ[actual_36] )
cv2_imshow(X_test[36]*255)
print("\n")
cv2_imshow(X_test_color[36])
print("--------------------------------------------------------------------------------------------------")
pred_59 = np.argmax(final_model.predict(np.expand_dims(X_test[59],axis=0)),axis=1)
actual_59 = np.argmax(y_test[59])
print("Model predicted Category Name for X_test 59 is: ", categ[pred_59])
print("Actual Category Name for X_test 59 is: ",categ[actual_59] )
cv2_imshow(X_test[59]*255)
print("\n")
cv2_imshow(X_test_color[59])
print("--------------------------------------------------------------------------------------------------")
1/1 [==============================] - 0s 22ms/step
Model predicted Category Name for X_test 2 is:  ['Small-flowered Cranesbill']
Actual Category Name for X_test 2 is:  Small-flowered Cranesbill

--------------------------------------------------------------------------------------------------
1/1 [==============================] - 0s 21ms/step
Model predicted Category Name for X_test 3 is:  ['Charlock']
Actual Category Name for X_test 3 is:  Charlock

--------------------------------------------------------------------------------------------------
1/1 [==============================] - 0s 22ms/step
Model predicted Category Name for X_test 33 is:  ['Maize']
Actual Category Name for X_test 33 is:  Maize

--------------------------------------------------------------------------------------------------
1/1 [==============================] - 0s 25ms/step
Model predicted Category Name for X_test 36 is:  ['Loose Silky-bent']
Actual Category Name for X_test 36 is:  Loose Silky-bent

--------------------------------------------------------------------------------------------------
1/1 [==============================] - 0s 21ms/step
Model predicted Category Name for X_test 59 is:  ['Cleavers']
Actual Category Name for X_test 59 is:  Cleavers

--------------------------------------------------------------------------------------------------
In [ ]:
df = pd.DataFrame({
    'Image Sequence': ['2', '3', '33', '36'],
    'Model Predicted': [f"{categ[pred_2]}", f"{categ[pred_3]}", f"{categ[pred_33]}", f"{categ[pred_36]}"],
    'Actual Name': [f"{categ[actual_2]}", f"{categ[actual_3]}", f"{categ[actual_33]}", f"{categ[actual_36]}"],
})
df
Out[ ]:
Image Sequence Model Predicted Actual Name
0 2 ['Small-flowered Cranesbill'] Small-flowered Cranesbill
1 3 ['Charlock'] Charlock
2 33 ['Maize'] Maize
3 36 ['Loose Silky-bent'] Loose Silky-bent

Observations:

  • The model accurately predicts the species of the sampled test images, demonstrating its reliability in plant species identification.
  • The model correctly classifies:
    • "Small-flowered Cranesbill" (image 2)
    • "Charlock" (image 3)
    • "Maize" (image 33)
    • "Loose Silky-bent" (image 36)
  • Each prediction matches the actual class name.

Actionable Insights and Business Recommendations

Actionable Insights¶

  1. Model Performance:

    • The final model shows high accuracy, indicating effectiveness in classifying plant seedlings.
    • Some class imbalance was observed; more data for underrepresented categories could improve performance.
    • Ensure high-quality, representative images for all classes. Consistent lighting, backgrounds, and image resolutions help the model learn more effectively and improve performance.
    • Fine-tune the model's hyperparameters (e.g., learning rate, batch size) to enhance overall performance, especially for challenging classes.
    • Use ensemble techniques, combining multiple models, to improve strength and robustness.
    • Consider transfer learning with pretrained models to improve accuracy and efficiency.
    • Implement more advanced feature extraction techniques to capture distinctive characteristics of the seedlings, such as edge detection, texture analysis, or color histograms.
    • Regularly monitor model performance and update it with new data to adapt to any changes in the plant seedlings or their environment.
  2. Generalization:

    • Confusion matrix and classification report highlight areas where the model performs well and where it struggles, guiding targeted improvements.
  3. Data Augmentation:

    • Techniques like rotation, zoom, and flipping improved generalization. These should be continued and expanded.
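The augmentations mentioned are simple, label-preserving array transforms. A minimal NumPy sketch (in practice these are applied on the fly by a framework utility such as Keras' `ImageDataGenerator`):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))  # stand-in for one seedling image

flipped = np.fliplr(img)       # horizontal flip
rotated = np.rot90(img)        # 90-degree rotation (zoom/shift work similarly)

# Stacking originals with augmented copies multiplies the training data
augmented = np.stack([img, flipped, rotated])
print(augmented.shape)  # (3, 32, 32, 3)
```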

Business Recommendations¶

The recommendations below aim to enhance the model's capabilities, promote user adoption, foster partnerships, and ensure continuous improvement, driving agricultural efficiency and informed decision-making.

Implementation

  • Deploy the model in a real-time plant monitoring system or mobile app for efficient classification.

Data Enhancement

  • Address class imbalance by collecting more data for underrepresented classes or using synthetic data generation techniques.
  • Consider transfer learning with pre-trained models to improve accuracy and efficiency.
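Besides collecting more images, class weights are a cheap way to counter imbalance during training. A hedged sketch with made-up label counts, using scikit-learn's 'balanced' heuristic (n_samples / (n_classes * class_count)):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical imbalanced labels; in the notebook these would come from Labels.csv
y = np.array([0] * 80 + [1] * 15 + [2] * 5)

weights = compute_class_weight(class_weight='balanced',
                               classes=np.unique(y), y=y)
# Rare classes receive larger weights, which can then be passed
# as the class_weight argument to model.fit
print(dict(zip(np.unique(y).tolist(), weights)))
```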

User Adoption

  • Provide training sessions and educational materials to help users understand and utilize the model effectively.

Partnerships and Collaboration

  • Collaborate with agricultural research institutes and technology companies for validation, data collection, and integration.

Continuous Improvement

  • Set up continuous performance monitoring and establish a feedback loop for users to report issues and improve the model.